Test Report: Docker_Linux_containerd_arm64 17907

7ea9a0daea14a922bd9e219098252b67b1b782a8:2024-01-08:32610

Test fail (18/316)

TestAddons/parallel/Ingress (36.3s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-241374 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-241374 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-241374 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0892762c-a256-40cc-ba1c-7317ade56652] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0892762c-a256-40cc-ba1c-7317ade56652] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003793036s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-241374 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.056819916s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-241374 addons disable ingress-dns --alsologtostderr -v=1: (1.104297711s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-241374 addons disable ingress --alsologtostderr -v=1: (7.829411041s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-241374
helpers_test.go:235: (dbg) docker inspect addons-241374:

-- stdout --
	[
	    {
	        "Id": "fbccd28e34e28feb941ba8d2bb366faa17ce1ae4860012809ff67cbd3075c801",
	        "Created": "2024-01-08T20:10:56.285058757Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 655844,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:10:56.613059784Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/fbccd28e34e28feb941ba8d2bb366faa17ce1ae4860012809ff67cbd3075c801/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbccd28e34e28feb941ba8d2bb366faa17ce1ae4860012809ff67cbd3075c801/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbccd28e34e28feb941ba8d2bb366faa17ce1ae4860012809ff67cbd3075c801/hosts",
	        "LogPath": "/var/lib/docker/containers/fbccd28e34e28feb941ba8d2bb366faa17ce1ae4860012809ff67cbd3075c801/fbccd28e34e28feb941ba8d2bb366faa17ce1ae4860012809ff67cbd3075c801-json.log",
	        "Name": "/addons-241374",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-241374:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-241374",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/692487e28a7c18be280b8c3ea168b7aa230a3094bae351c9a59362d7c70f2caf-init/diff:/var/lib/docker/overlay2/5440a5a336c464ed564efc18a632104b770481b7cc483f7cadb6269a7b019538/diff",
	                "MergedDir": "/var/lib/docker/overlay2/692487e28a7c18be280b8c3ea168b7aa230a3094bae351c9a59362d7c70f2caf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/692487e28a7c18be280b8c3ea168b7aa230a3094bae351c9a59362d7c70f2caf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/692487e28a7c18be280b8c3ea168b7aa230a3094bae351c9a59362d7c70f2caf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-241374",
	                "Source": "/var/lib/docker/volumes/addons-241374/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-241374",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-241374",
	                "name.minikube.sigs.k8s.io": "addons-241374",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e14c0fced8c415749a9c6e38e89f8885ab78bc711fb9082cb3840a56df1dd158",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e14c0fced8c4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-241374": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fbccd28e34e2",
	                        "addons-241374"
	                    ],
	                    "NetworkID": "1ce91b58eba19689955851a2a94466e727d5f6f1ee1daa8c498c434fd8139772",
	                    "EndpointID": "648df701e8659ed3fc167c47ad5f26feccd12374d4182ef661440f31f4673125",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-241374 -n addons-241374
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-241374 logs -n 25: (1.60754333s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| delete  | -p download-only-896079                                                                     | download-only-896079   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| delete  | -p download-only-896079                                                                     | download-only-896079   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| start   | --download-only -p                                                                          | download-docker-787864 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | download-docker-787864                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-787864                                                                   | download-docker-787864 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-553324   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | binary-mirror-553324                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42199                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-553324                                                                     | binary-mirror-553324   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| addons  | disable dashboard -p                                                                        | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | addons-241374                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | addons-241374                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-241374 --wait=true                                                                | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:12 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-241374 ip                                                                            | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	| addons  | addons-241374 addons disable                                                                | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | -p addons-241374                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-241374 ssh cat                                                                       | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | /opt/local-path-provisioner/pvc-88f0a05f-751c-46df-9b15-285db5e558d1_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-241374 addons disable                                                                | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | addons-241374                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | -p addons-241374                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-241374 addons                                                                        | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-241374 addons                                                                        | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:14 UTC |
	|         | addons-241374                                                                               |                        |         |         |                     |                     |
	| addons  | addons-241374 addons                                                                        | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-241374 ssh curl -s                                                                   | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-241374 ip                                                                            | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	| addons  | addons-241374 addons disable                                                                | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-241374 addons disable                                                                | addons-241374          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:33.207961  655385 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:10:33.208977  655385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:33.208988  655385 out.go:309] Setting ErrFile to fd 2...
	I0108 20:10:33.209047  655385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:33.209418  655385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:10:33.210012  655385 out.go:303] Setting JSON to false
	I0108 20:10:33.210854  655385 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10374,"bootTime":1704734260,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:10:33.210948  655385 start.go:138] virtualization:  
	I0108 20:10:33.213775  655385 out.go:177] * [addons-241374] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:10:33.216416  655385 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:10:33.218349  655385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:10:33.216549  655385 notify.go:220] Checking for updates...
	I0108 20:10:33.222281  655385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:10:33.224332  655385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:10:33.226215  655385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:10:33.228154  655385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:10:33.230219  655385 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:10:33.254198  655385 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:10:33.254330  655385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:33.333767  655385 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:33.323940831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:33.333873  655385 docker.go:295] overlay module found
	I0108 20:10:33.342795  655385 out.go:177] * Using the docker driver based on user configuration
	I0108 20:10:33.345210  655385 start.go:298] selected driver: docker
	I0108 20:10:33.345230  655385 start.go:902] validating driver "docker" against <nil>
	I0108 20:10:33.345244  655385 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:10:33.345876  655385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:33.415464  655385 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:33.40584021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:33.415642  655385 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:10:33.415909  655385 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:10:33.417838  655385 out.go:177] * Using Docker driver with root privileges
	I0108 20:10:33.419726  655385 cni.go:84] Creating CNI manager for ""
	I0108 20:10:33.419746  655385 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:10:33.419759  655385 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:10:33.419775  655385 start_flags.go:323] config:
	{Name:addons-241374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-241374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:33.421942  655385 out.go:177] * Starting control plane node addons-241374 in cluster addons-241374
	I0108 20:10:33.423824  655385 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:10:33.426032  655385 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:10:33.427835  655385 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:10:33.427887  655385 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0108 20:10:33.427900  655385 cache.go:56] Caching tarball of preloaded images
	I0108 20:10:33.427925  655385 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:10:33.427994  655385 preload.go:174] Found /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0108 20:10:33.428004  655385 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0108 20:10:33.428388  655385 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/config.json ...
	I0108 20:10:33.428420  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/config.json: {Name:mk21843f36e47dc701abc6690ed37074d6836c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:33.447435  655385 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:10:33.447543  655385 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:10:33.447562  655385 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:10:33.447570  655385 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:10:33.447579  655385 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:10:33.447584  655385 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I0108 20:10:49.397513  655385 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I0108 20:10:49.397559  655385 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:10:49.397624  655385 start.go:365] acquiring machines lock for addons-241374: {Name:mka3aca552bf2ca0cb5d7d7e0fa7038f421ed9b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:10:49.397744  655385 start.go:369] acquired machines lock for "addons-241374" in 93.587µs
	I0108 20:10:49.397772  655385 start.go:93] Provisioning new machine with config: &{Name:addons-241374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-241374 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:10:49.397850  655385 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:10:49.400564  655385 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0108 20:10:49.400837  655385 start.go:159] libmachine.API.Create for "addons-241374" (driver="docker")
	I0108 20:10:49.400869  655385 client.go:168] LocalClient.Create starting
	I0108 20:10:49.401049  655385 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem
	I0108 20:10:49.710492  655385 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem
	I0108 20:10:49.887386  655385 cli_runner.go:164] Run: docker network inspect addons-241374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:10:49.904114  655385 cli_runner.go:211] docker network inspect addons-241374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:10:49.904199  655385 network_create.go:281] running [docker network inspect addons-241374] to gather additional debugging logs...
	I0108 20:10:49.904221  655385 cli_runner.go:164] Run: docker network inspect addons-241374
	W0108 20:10:49.920288  655385 cli_runner.go:211] docker network inspect addons-241374 returned with exit code 1
	I0108 20:10:49.920324  655385 network_create.go:284] error running [docker network inspect addons-241374]: docker network inspect addons-241374: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-241374 not found
	I0108 20:10:49.920336  655385 network_create.go:286] output of [docker network inspect addons-241374]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-241374 not found
	
	** /stderr **
	I0108 20:10:49.920441  655385 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:10:49.937282  655385 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40027ff900}
	I0108 20:10:49.937324  655385 network_create.go:124] attempt to create docker network addons-241374 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 20:10:49.937384  655385 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-241374 addons-241374
	I0108 20:10:50.024149  655385 network_create.go:108] docker network addons-241374 192.168.49.0/24 created
	I0108 20:10:50.024186  655385 kic.go:121] calculated static IP "192.168.49.2" for the "addons-241374" container
	I0108 20:10:50.024286  655385 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:10:50.044144  655385 cli_runner.go:164] Run: docker volume create addons-241374 --label name.minikube.sigs.k8s.io=addons-241374 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:10:50.063681  655385 oci.go:103] Successfully created a docker volume addons-241374
	I0108 20:10:50.063778  655385 cli_runner.go:164] Run: docker run --rm --name addons-241374-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241374 --entrypoint /usr/bin/test -v addons-241374:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:10:51.982092  655385 cli_runner.go:217] Completed: docker run --rm --name addons-241374-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241374 --entrypoint /usr/bin/test -v addons-241374:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.918270854s)
	I0108 20:10:51.982121  655385 oci.go:107] Successfully prepared a docker volume addons-241374
	I0108 20:10:51.982166  655385 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:10:51.982187  655385 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:10:51.982272  655385 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-241374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:10:56.200742  655385 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-241374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.218426853s)
	I0108 20:10:56.200789  655385 kic.go:203] duration metric: took 4.218598 seconds to extract preloaded images to volume
	W0108 20:10:56.200923  655385 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:10:56.201092  655385 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:10:56.268007  655385 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-241374 --name addons-241374 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241374 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-241374 --network addons-241374 --ip 192.168.49.2 --volume addons-241374:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:10:56.622529  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Running}}
	I0108 20:10:56.650476  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:10:56.673324  655385 cli_runner.go:164] Run: docker exec addons-241374 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:10:56.732151  655385 oci.go:144] the created container "addons-241374" has a running status.
	I0108 20:10:56.732184  655385 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa...
	I0108 20:10:57.257231  655385 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:10:57.287823  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:10:57.313342  655385 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:10:57.313361  655385 kic_runner.go:114] Args: [docker exec --privileged addons-241374 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:10:57.395417  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:10:57.418301  655385 machine.go:88] provisioning docker machine ...
	I0108 20:10:57.418344  655385 ubuntu.go:169] provisioning hostname "addons-241374"
	I0108 20:10:57.418416  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:57.449534  655385 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:57.449979  655385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I0108 20:10:57.449993  655385 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-241374 && echo "addons-241374" | sudo tee /etc/hostname
	I0108 20:10:57.624380  655385 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-241374
	
	I0108 20:10:57.624493  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:57.649870  655385 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:57.650280  655385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I0108 20:10:57.650304  655385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-241374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-241374/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-241374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:10:57.794213  655385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:10:57.794256  655385 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-649468/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-649468/.minikube}
	I0108 20:10:57.794275  655385 ubuntu.go:177] setting up certificates
	I0108 20:10:57.794284  655385 provision.go:83] configureAuth start
	I0108 20:10:57.794345  655385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241374
	I0108 20:10:57.815004  655385 provision.go:138] copyHostCerts
	I0108 20:10:57.815079  655385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem (1078 bytes)
	I0108 20:10:57.815190  655385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem (1123 bytes)
	I0108 20:10:57.815286  655385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem (1679 bytes)
	I0108 20:10:57.815335  655385 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem org=jenkins.addons-241374 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-241374]
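The SAN list in the line above is what ends up in the server certificate's Subject Alternative Name extension. As a rough, hypothetical sketch (not minikube's provision code), issuing a certificate that covers those DNS names and IPs looks roughly like this; the self-signing and the 24h validity are simplifications, since minikube actually signs with its own CA:

	// selfsign.go — illustrative sketch only, not minikube's provision code:
	// issue a server certificate whose Subject Alternative Names cover the
	// DNS names and IPs listed in the log line above.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-241374"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN entries from the log line above.
			DNSNames:    []string{"localhost", "minikube", "addons-241374"},
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed here for brevity; minikube signs with its cluster CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			panic(err)
		}
		defer out.Close()
		if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}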
	I0108 20:10:58.301341  655385 provision.go:172] copyRemoteCerts
	I0108 20:10:58.301410  655385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:10:58.301451  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:58.323273  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:10:58.424033  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:10:58.452954  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 20:10:58.481577  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:10:58.510690  655385 provision.go:86] duration metric: configureAuth took 716.387873ms
	I0108 20:10:58.510717  655385 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:10:58.510917  655385 config.go:182] Loaded profile config "addons-241374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:10:58.510924  655385 machine.go:91] provisioned docker machine in 1.092602493s
	I0108 20:10:58.510930  655385 client.go:171] LocalClient.Create took 9.110055117s
	I0108 20:10:58.510943  655385 start.go:167] duration metric: libmachine.API.Create for "addons-241374" took 9.110107605s
	I0108 20:10:58.510951  655385 start.go:300] post-start starting for "addons-241374" (driver="docker")
	I0108 20:10:58.510959  655385 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:10:58.511012  655385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:10:58.511051  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:58.530814  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:10:58.632570  655385 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:10:58.636544  655385 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:10:58.636582  655385 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:10:58.636593  655385 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:10:58.636601  655385 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:10:58.636614  655385 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/addons for local assets ...
	I0108 20:10:58.636684  655385 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/files for local assets ...
	I0108 20:10:58.636710  655385 start.go:303] post-start completed in 125.75395ms
	I0108 20:10:58.637066  655385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241374
	I0108 20:10:58.654600  655385 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/config.json ...
	I0108 20:10:58.654884  655385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:10:58.654935  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:58.672215  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:10:58.766964  655385 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:10:58.772542  655385 start.go:128] duration metric: createHost completed in 9.374677341s
	I0108 20:10:58.772565  655385 start.go:83] releasing machines lock for "addons-241374", held for 9.37480977s
	I0108 20:10:58.772638  655385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241374
	I0108 20:10:58.790069  655385 ssh_runner.go:195] Run: cat /version.json
	I0108 20:10:58.790126  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:58.790400  655385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:10:58.790455  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:10:58.816097  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:10:58.817055  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:10:58.909446  655385 ssh_runner.go:195] Run: systemctl --version
	I0108 20:10:59.046389  655385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:10:59.052341  655385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 20:10:59.082599  655385 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:10:59.082692  655385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:10:59.117445  655385 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 20:10:59.117471  655385 start.go:475] detecting cgroup driver to use...
	I0108 20:10:59.117503  655385 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:10:59.117568  655385 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:10:59.131892  655385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:10:59.145572  655385 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:10:59.145646  655385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:10:59.161787  655385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:10:59.178416  655385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:10:59.276652  655385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:10:59.385597  655385 docker.go:233] disabling docker service ...
	I0108 20:10:59.385671  655385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:10:59.406790  655385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:10:59.420500  655385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:10:59.520659  655385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:10:59.620247  655385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:10:59.633710  655385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:10:59.653614  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 20:10:59.665152  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 20:10:59.677135  655385 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 20:10:59.677252  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 20:10:59.688927  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:10:59.700890  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 20:10:59.712527  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:10:59.724328  655385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:10:59.735435  655385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 20:10:59.748393  655385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:10:59.761210  655385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:10:59.771728  655385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:10:59.868754  655385 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:10:59.999865  655385 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:11:00.000014  655385 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
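The "Will wait 60s for socket path" step above boils down to polling stat on the containerd socket until it appears or a deadline passes. A minimal, hypothetical illustration of that pattern (plain Go, not the ssh_runner-based implementation):

	// socketwait.go — illustrative only: poll until the containerd socket
	// exists or the timeout elapses, mirroring the wait step in the log.
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("containerd socket is ready")
	}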
	I0108 20:11:00.014600  655385 start.go:543] Will wait 60s for crictl version
	I0108 20:11:00.014688  655385 ssh_runner.go:195] Run: which crictl
	I0108 20:11:00.035908  655385 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:11:00.173092  655385 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0108 20:11:00.173256  655385 ssh_runner.go:195] Run: containerd --version
	I0108 20:11:00.279944  655385 ssh_runner.go:195] Run: containerd --version
	I0108 20:11:00.334206  655385 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0108 20:11:00.336258  655385 cli_runner.go:164] Run: docker network inspect addons-241374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:11:00.356661  655385 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:11:00.362517  655385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
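The bash one-liner above is the usual grep/tee idiom for an idempotent hosts entry: drop any existing line ending in "\thost.minikube.internal", then append the fresh mapping. Purely as an illustration (a hypothetical helper, not minikube's code), the same rewrite in Go:

	// hostspatch.go — hypothetical sketch of the idempotent hosts rewrite above.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func patchHosts(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		suffix := "\t" + name
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, suffix) {
				continue // stale entry; replaced below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		// "hosts.copy" is a scratch file for illustration; the real target is /etc/hosts.
		if err := patchHosts("hosts.copy", "192.168.49.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}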
	I0108 20:11:00.378662  655385 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:11:00.378740  655385 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:11:00.428543  655385 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:11:00.428570  655385 containerd.go:518] Images already preloaded, skipping extraction
	I0108 20:11:00.428635  655385 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:11:00.471996  655385 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:11:00.472019  655385 cache_images.go:84] Images are preloaded, skipping loading
	I0108 20:11:00.472088  655385 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:11:00.514367  655385 cni.go:84] Creating CNI manager for ""
	I0108 20:11:00.514396  655385 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:11:00.514443  655385 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:11:00.514470  655385 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-241374 NodeName:addons-241374 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:11:00.514614  655385 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-241374"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:11:00.514684  655385 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-241374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-241374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
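Note that the KubeletConfiguration above pins cgroupDriver to cgroupfs, matching the SystemdCgroup = false patch applied to /etc/containerd/config.toml earlier in the log. A tiny, hypothetical consistency check of that pairing (assumes the gopkg.in/yaml.v3 module; not part of minikube):

	// cgroupcheck.go — hypothetical: parse the KubeletConfiguration shown above
	// and confirm its cgroupDriver matches containerd's cgroupfs setting.
	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	`
	
	func main() {
		var cfg struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := yaml.Unmarshal([]byte(kubeletConfig), &cfg); err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(1)
		}
		if cfg.CgroupDriver != "cgroupfs" {
			fmt.Fprintf(os.Stderr, "kubelet cgroupDriver %q does not match containerd (SystemdCgroup=false)\n", cfg.CgroupDriver)
			os.Exit(1)
		}
		fmt.Println("kubelet and containerd agree on the cgroupfs driver")
	}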
	I0108 20:11:00.514750  655385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:11:00.525903  655385 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:11:00.525986  655385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:11:00.537663  655385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0108 20:11:00.559495  655385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:11:00.581920  655385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0108 20:11:00.603421  655385 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:11:00.608509  655385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:11:00.622350  655385 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374 for IP: 192.168.49.2
	I0108 20:11:00.622380  655385 certs.go:190] acquiring lock for shared ca certs: {Name:mk8baa4ad3918f12788abe17f587583afd1a9c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:00.622515  655385 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key
	I0108 20:11:00.922210  655385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt ...
	I0108 20:11:00.922240  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt: {Name:mk92c91152f76019bbef4006e8b0b34ebb604bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:00.923005  655385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key ...
	I0108 20:11:00.923022  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key: {Name:mk7eba45de717bff3b8a03fb9129e031dc2e4c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:00.923561  655385 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key
	I0108 20:11:01.807886  655385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt ...
	I0108 20:11:01.807918  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt: {Name:mk596c3161ff7c8c86950c6b75fc2c13a4b80648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:01.808107  655385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key ...
	I0108 20:11:01.808119  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key: {Name:mk359d800b01101b038d8c376ec3d7b0ca989d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:01.808236  655385 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.key
	I0108 20:11:01.808255  655385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt with IP's: []
	I0108 20:11:02.104671  655385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt ...
	I0108 20:11:02.104718  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: {Name:mk4b7fa82ca6959d46646570a9f2e8403b915ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:02.105019  655385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.key ...
	I0108 20:11:02.105034  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.key: {Name:mkddb1d72ddc0e8e61b75bc721feccc3112ded27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:02.105837  655385 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.key.dd3b5fb2
	I0108 20:11:02.105870  655385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:11:02.666961  655385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.crt.dd3b5fb2 ...
	I0108 20:11:02.666993  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.crt.dd3b5fb2: {Name:mk33d5070c78edd016eeffeae3166613c2febd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:02.667237  655385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.key.dd3b5fb2 ...
	I0108 20:11:02.667251  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.key.dd3b5fb2: {Name:mkb2734a31d3879e74badd0fd9941111f051fbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:02.667736  655385 certs.go:337] copying /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.crt
	I0108 20:11:02.667843  655385 certs.go:341] copying /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.key
	I0108 20:11:02.667935  655385 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.key
	I0108 20:11:02.667985  655385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.crt with IP's: []
	I0108 20:11:02.857458  655385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.crt ...
	I0108 20:11:02.857490  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.crt: {Name:mk4a77c31bdc607bdc65664b20f275edcd89deba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:02.857684  655385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.key ...
	I0108 20:11:02.857698  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.key: {Name:mk3a9bedd42eecacd0f1fe1bbbe9f73570760180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:02.857915  655385 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:11:02.857961  655385 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:11:02.857997  655385 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:11:02.858029  655385 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem (1679 bytes)
	I0108 20:11:02.858649  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:11:02.887361  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:11:02.916616  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:11:02.945232  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:11:02.974151  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:11:03.004478  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:11:03.036447  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:11:03.066918  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 20:11:03.099355  655385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:11:03.129934  655385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:11:03.153710  655385 ssh_runner.go:195] Run: openssl version
	I0108 20:11:03.161056  655385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:11:03.173359  655385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:11:03.178319  655385 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:11:03.178403  655385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:11:03.187205  655385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:11:03.198825  655385 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:11:03.203374  655385 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:11:03.203425  655385 kubeadm.go:404] StartCluster: {Name:addons-241374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-241374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:03.203509  655385 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 20:11:03.203570  655385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:11:03.245781  655385 cri.go:89] found id: ""
	I0108 20:11:03.245901  655385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:11:03.256690  655385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:11:03.269248  655385 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:11:03.269324  655385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:11:03.280121  655385 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:11:03.280162  655385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 20:11:03.343413  655385 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:11:03.343481  655385 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:11:03.392884  655385 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:11:03.392963  655385 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:11:03.393025  655385 kubeadm.go:322] OS: Linux
	I0108 20:11:03.393074  655385 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:11:03.393130  655385 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:11:03.393193  655385 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:11:03.393253  655385 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:11:03.393321  655385 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:11:03.393390  655385 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:11:03.393453  655385 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 20:11:03.393521  655385 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 20:11:03.393603  655385 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 20:11:03.475047  655385 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:11:03.475198  655385 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:11:03.475318  655385 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:11:03.721347  655385 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:11:03.723903  655385 out.go:204]   - Generating certificates and keys ...
	I0108 20:11:03.724084  655385 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:11:03.724304  655385 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:11:04.570836  655385 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:11:05.755773  655385 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:11:05.971956  655385 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:11:06.138176  655385 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:11:06.536681  655385 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:11:06.536929  655385 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-241374 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:11:07.007485  655385 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:11:07.007783  655385 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-241374 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:11:07.261843  655385 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:11:08.147545  655385 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:11:08.583374  655385 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:11:08.583803  655385 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:11:09.546846  655385 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:11:10.531086  655385 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:11:10.669600  655385 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:11:10.849144  655385 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:11:10.849762  655385 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:11:10.854529  655385 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:11:10.856930  655385 out.go:204]   - Booting up control plane ...
	I0108 20:11:10.857052  655385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:11:10.857133  655385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:11:10.858134  655385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:11:10.874183  655385 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:11:10.875246  655385 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:11:10.875462  655385 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:11:10.981584  655385 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:11:19.485213  655385 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503767 seconds
	I0108 20:11:19.485579  655385 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:11:19.501716  655385 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:11:20.039208  655385 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:11:20.039441  655385 kubeadm.go:322] [mark-control-plane] Marking the node addons-241374 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:11:20.553955  655385 kubeadm.go:322] [bootstrap-token] Using token: vk16fl.0rxrdwzl8l4nqpis
	I0108 20:11:20.556003  655385 out.go:204]   - Configuring RBAC rules ...
	I0108 20:11:20.556128  655385 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:11:20.561647  655385 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:11:20.569535  655385 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:11:20.573669  655385 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:11:20.577594  655385 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:11:20.583296  655385 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:11:20.597832  655385 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:11:20.904773  655385 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:11:21.004660  655385 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:11:21.006668  655385 kubeadm.go:322] 
	I0108 20:11:21.006746  655385 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:11:21.006754  655385 kubeadm.go:322] 
	I0108 20:11:21.006827  655385 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:11:21.006858  655385 kubeadm.go:322] 
	I0108 20:11:21.006884  655385 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:11:21.006962  655385 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:11:21.007015  655385 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:11:21.007026  655385 kubeadm.go:322] 
	I0108 20:11:21.007079  655385 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:11:21.007088  655385 kubeadm.go:322] 
	I0108 20:11:21.007133  655385 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:11:21.007140  655385 kubeadm.go:322] 
	I0108 20:11:21.007190  655385 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:11:21.007265  655385 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:11:21.007333  655385 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:11:21.007343  655385 kubeadm.go:322] 
	I0108 20:11:21.007457  655385 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:11:21.007533  655385 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:11:21.007541  655385 kubeadm.go:322] 
	I0108 20:11:21.007620  655385 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vk16fl.0rxrdwzl8l4nqpis \
	I0108 20:11:21.007720  655385 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e7aa231785652d24090e2cd097637f46032eb43e585bbef4633ff038c4bd0902 \
	I0108 20:11:21.007770  655385 kubeadm.go:322] 	--control-plane 
	I0108 20:11:21.007781  655385 kubeadm.go:322] 
	I0108 20:11:21.007861  655385 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:11:21.007872  655385 kubeadm.go:322] 
	I0108 20:11:21.007963  655385 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vk16fl.0rxrdwzl8l4nqpis \
	I0108 20:11:21.008075  655385 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e7aa231785652d24090e2cd097637f46032eb43e585bbef4633ff038c4bd0902 
	I0108 20:11:21.012204  655385 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:11:21.012334  655385 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
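The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A small stand-alone sketch of recomputing it (illustrative only; the "ca.crt" path is an assumption — inside the node the CA lives at /var/lib/minikube/certs/ca.crt):

	// cahash.go — recompute the kubeadm discovery token CA cert hash:
	// sha256 of the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}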
	I0108 20:11:21.012366  655385 cni.go:84] Creating CNI manager for ""
	I0108 20:11:21.012380  655385 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:11:21.015964  655385 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:11:21.018136  655385 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:11:21.024333  655385 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:11:21.024405  655385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:11:21.073482  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:11:22.057233  655385 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:11:22.057402  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:22.057505  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=addons-241374 minikube.k8s.io/updated_at=2024_01_08T20_11_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:22.073796  655385 ops.go:34] apiserver oom_adj: -16
	I0108 20:11:22.271367  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:22.772275  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:23.271975  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:23.771911  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:24.271546  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:24.772480  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:25.271764  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:25.772330  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:26.272253  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:26.771753  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:27.271893  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:27.771515  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:28.271971  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:28.772388  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:29.271499  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:29.772039  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:30.271796  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:30.772317  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:31.271984  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:31.771567  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:32.271928  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:32.772480  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:33.272188  655385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:33.446881  655385 kubeadm.go:1088] duration metric: took 11.389535248s to wait for elevateKubeSystemPrivileges.
	I0108 20:11:33.446914  655385 kubeadm.go:406] StartCluster complete in 30.243492675s
	I0108 20:11:33.446931  655385 settings.go:142] acquiring lock: {Name:mkb63cd96d7a856f465b0592d8a592dc849b8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:33.447671  655385 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:11:33.448111  655385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/kubeconfig: {Name:mk40e5900c8ed31a9e7a0515010236c17752c8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:33.450208  655385 config.go:182] Loaded profile config "addons-241374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:11:33.450265  655385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:11:33.450473  655385 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 20:11:33.450631  655385 addons.go:69] Setting yakd=true in profile "addons-241374"
	I0108 20:11:33.450652  655385 addons.go:237] Setting addon yakd=true in "addons-241374"
	I0108 20:11:33.450713  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.451186  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.451746  655385 addons.go:69] Setting inspektor-gadget=true in profile "addons-241374"
	I0108 20:11:33.451795  655385 addons.go:237] Setting addon inspektor-gadget=true in "addons-241374"
	I0108 20:11:33.451835  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.452246  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.452801  655385 addons.go:69] Setting cloud-spanner=true in profile "addons-241374"
	I0108 20:11:33.452821  655385 addons.go:237] Setting addon cloud-spanner=true in "addons-241374"
	I0108 20:11:33.452827  655385 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-241374"
	I0108 20:11:33.452848  655385 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-241374"
	I0108 20:11:33.452854  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.452884  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.453270  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.453310  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.455485  655385 addons.go:69] Setting registry=true in profile "addons-241374"
	I0108 20:11:33.455519  655385 addons.go:237] Setting addon registry=true in "addons-241374"
	I0108 20:11:33.455567  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.456012  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.469067  655385 addons.go:69] Setting storage-provisioner=true in profile "addons-241374"
	I0108 20:11:33.469102  655385 addons.go:237] Setting addon storage-provisioner=true in "addons-241374"
	I0108 20:11:33.469154  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.469626  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.452817  655385 addons.go:69] Setting metrics-server=true in profile "addons-241374"
	I0108 20:11:33.480025  655385 addons.go:237] Setting addon metrics-server=true in "addons-241374"
	I0108 20:11:33.480083  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.480529  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.489087  655385 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-241374"
	I0108 20:11:33.489129  655385 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-241374"
	I0108 20:11:33.491034  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.495994  655385 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-241374"
	I0108 20:11:33.496062  655385 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-241374"
	I0108 20:11:33.496105  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.496575  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.513081  655385 addons.go:69] Setting default-storageclass=true in profile "addons-241374"
	I0108 20:11:33.513122  655385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-241374"
	I0108 20:11:33.513468  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.513841  655385 addons.go:69] Setting volumesnapshots=true in profile "addons-241374"
	I0108 20:11:33.513859  655385 addons.go:237] Setting addon volumesnapshots=true in "addons-241374"
	I0108 20:11:33.513898  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.514286  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.529884  655385 addons.go:69] Setting gcp-auth=true in profile "addons-241374"
	I0108 20:11:33.529942  655385 mustload.go:65] Loading cluster: addons-241374
	I0108 20:11:33.530166  655385 config.go:182] Loaded profile config "addons-241374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:11:33.530457  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.547005  655385 addons.go:69] Setting ingress=true in profile "addons-241374"
	I0108 20:11:33.547042  655385 addons.go:237] Setting addon ingress=true in "addons-241374"
	I0108 20:11:33.547111  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.547584  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.569334  655385 addons.go:69] Setting ingress-dns=true in profile "addons-241374"
	I0108 20:11:33.569374  655385 addons.go:237] Setting addon ingress-dns=true in "addons-241374"
	I0108 20:11:33.569432  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.569905  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.722171  655385 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 20:11:33.734964  655385 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 20:11:33.743747  655385 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 20:11:33.743786  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 20:11:33.743869  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.735208  655385 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 20:11:33.746565  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 20:11:33.746648  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.755386  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.735217  655385 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 20:11:33.735221  655385 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 20:11:33.767279  655385 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:11:33.767235  655385 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 20:11:33.771699  655385 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:11:33.782224  655385 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 20:11:33.775334  655385 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:11:33.775365  655385 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 20:11:33.775374  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:11:33.792025  655385 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 20:11:33.792045  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 20:11:33.792052  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 20:11:33.792084  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.792104  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.792032  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.821327  655385 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 20:11:33.792046  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 20:11:33.826153  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.843279  655385 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 20:11:33.843300  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 20:11:33.843364  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.882579  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 20:11:33.884906  655385 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 20:11:33.884946  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 20:11:33.885112  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.905179  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 20:11:33.919355  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 20:11:33.914479  655385 addons.go:237] Setting addon default-storageclass=true in "addons-241374"
	I0108 20:11:33.915597  655385 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-241374"
	I0108 20:11:33.919881  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.922745  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.937056  655385 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 20:11:33.939649  655385 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:11:33.939667  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 20:11:33.939731  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.959821  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 20:11:33.961649  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 20:11:33.966170  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 20:11:33.968299  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 20:11:33.966470  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:33.939177  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:33.975545  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 20:11:33.973511  655385 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 20:11:33.974205  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:33.974669  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:33.987004  655385 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:11:33.988945  655385 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:11:33.991450  655385 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 20:11:33.997602  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 20:11:33.997633  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 20:11:33.997705  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:33.991835  655385 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:11:34.028622  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 20:11:34.028705  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:34.047084  655385 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-241374" context rescaled to 1 replicas
	I0108 20:11:34.047127  655385 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:11:34.049400  655385 out.go:177] * Verifying Kubernetes components...
	I0108 20:11:34.052083  655385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:11:34.068896  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.083870  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.083870  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.096664  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.113457  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.154980  655385 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:11:34.155004  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:11:34.155081  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:34.156822  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.177450  655385 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 20:11:34.179868  655385 out.go:177]   - Using image docker.io/busybox:stable
	I0108 20:11:34.182022  655385 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:11:34.182046  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 20:11:34.182120  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:34.190759  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.258825  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.260012  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.261787  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.271489  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:34.311208  655385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 20:11:34.426377  655385 node_ready.go:35] waiting up to 6m0s for node "addons-241374" to be "Ready" ...
	I0108 20:11:34.430054  655385 node_ready.go:49] node "addons-241374" has status "Ready":"True"
	I0108 20:11:34.430124  655385 node_ready.go:38] duration metric: took 3.669841ms waiting for node "addons-241374" to be "Ready" ...
	I0108 20:11:34.430149  655385 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:11:34.439433  655385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxf4x" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:34.658681  655385 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 20:11:34.658707  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 20:11:34.732641  655385 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 20:11:34.732711  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 20:11:34.826752  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:11:34.854920  655385 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 20:11:34.854995  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 20:11:34.856698  655385 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 20:11:34.856760  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 20:11:34.897414  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:11:34.930596  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 20:11:34.965018  655385 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 20:11:34.965097  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 20:11:34.984526  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:11:35.089772  655385 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 20:11:35.089857  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 20:11:35.103765  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:11:35.194517  655385 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 20:11:35.194599  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 20:11:35.196474  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:11:35.232084  655385 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 20:11:35.232171  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 20:11:35.309269  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:11:35.313916  655385 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 20:11:35.313995  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 20:11:35.364549  655385 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:11:35.364616  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 20:11:35.377069  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 20:11:35.377163  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 20:11:35.416750  655385 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 20:11:35.416826  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 20:11:35.443105  655385 pod_ready.go:97] error getting pod "coredns-5dd5756b68-kxf4x" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-kxf4x" not found
	I0108 20:11:35.443194  655385 pod_ready.go:81] duration metric: took 1.003686999s waiting for pod "coredns-5dd5756b68-kxf4x" in "kube-system" namespace to be "Ready" ...
	E0108 20:11:35.443241  655385 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kxf4x" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-kxf4x" not found
	I0108 20:11:35.443276  655385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:35.487758  655385 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 20:11:35.487828  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 20:11:35.495949  655385 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 20:11:35.495970  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 20:11:35.548016  655385 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:11:35.548039  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 20:11:35.615768  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:11:35.623430  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 20:11:35.623508  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 20:11:35.640309  655385 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 20:11:35.640389  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 20:11:35.719596  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 20:11:35.719668  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 20:11:35.761209  655385 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 20:11:35.761238  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 20:11:35.827511  655385 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:11:35.827544  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 20:11:35.855537  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:11:35.920899  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 20:11:35.920928  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 20:11:36.073819  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:11:36.093520  655385 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 20:11:36.093558  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 20:11:36.104258  655385 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:11:36.104287  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 20:11:36.192628  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 20:11:36.192657  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 20:11:36.395190  655385 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:11:36.395214  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 20:11:36.395838  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:11:36.462742  655385 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 20:11:36.462770  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 20:11:36.555097  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:11:36.736231  655385 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 20:11:36.736258  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 20:11:37.090688  655385 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 20:11:37.090761  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 20:11:37.454430  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:37.490290  655385 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 20:11:37.490352  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 20:11:37.520224  655385 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 20:11:37.520304  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 20:11:37.545877  655385 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:11:37.545949  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 20:11:37.604888  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:11:37.873922  655385 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.562672617s)
	I0108 20:11:37.873993  655385 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 20:11:39.461552  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:40.200775  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.303276374s)
	I0108 20:11:40.200874  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.270207923s)
	I0108 20:11:40.200939  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.216342345s)
	I0108 20:11:40.200970  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.097131948s)
	I0108 20:11:40.201506  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.374678063s)
	W0108 20:11:40.242298  655385 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0108 20:11:40.574134  655385 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 20:11:40.574283  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:40.600125  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:41.284872  655385 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 20:11:41.413132  655385 addons.go:237] Setting addon gcp-auth=true in "addons-241374"
	I0108 20:11:41.413190  655385 host.go:66] Checking if "addons-241374" exists ...
	I0108 20:11:41.413719  655385 cli_runner.go:164] Run: docker container inspect addons-241374 --format={{.State.Status}}
	I0108 20:11:41.443826  655385 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 20:11:41.443875  655385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241374
	I0108 20:11:41.474216  655385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/addons-241374/id_rsa Username:docker}
	I0108 20:11:41.950255  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:42.462385  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.846531127s)
	I0108 20:11:42.462428  655385 addons.go:473] Verifying addon registry=true in "addons-241374"
	I0108 20:11:42.466792  655385 out.go:177] * Verifying registry addon...
	I0108 20:11:42.462592  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.152975836s)
	I0108 20:11:42.462802  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.266259001s)
	I0108 20:11:42.462866  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.60709623s)
	I0108 20:11:42.462941  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.388988557s)
	I0108 20:11:42.462981  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.067115178s)
	I0108 20:11:42.463071  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.907926895s)
	I0108 20:11:42.470297  655385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 20:11:42.470535  655385 addons.go:473] Verifying addon ingress=true in "addons-241374"
	I0108 20:11:42.476350  655385 out.go:177] * Verifying ingress addon...
	I0108 20:11:42.470708  655385 addons.go:473] Verifying addon metrics-server=true in "addons-241374"
	W0108 20:11:42.470733  655385 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:11:42.475318  655385 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:11:42.478493  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:42.479299  655385 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 20:11:42.479500  655385 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-241374 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 20:11:42.479605  655385 retry.go:31] will retry after 176.684292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:11:42.487300  655385 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 20:11:42.487329  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:42.659156  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:11:42.985586  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.005713  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:43.478798  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.486408  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:43.929661  655385 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.485804596s)
	I0108 20:11:43.935836  655385 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:11:43.930320  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.325344071s)
	I0108 20:11:43.935997  655385 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-241374"
	I0108 20:11:43.942425  655385 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 20:11:43.942439  655385 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 20:11:43.949555  655385 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 20:11:43.949650  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 20:11:43.954324  655385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 20:11:43.964087  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:43.967155  655385 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:11:43.967218  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:43.976149  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.983725  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:44.049414  655385 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 20:11:44.049445  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 20:11:44.112160  655385 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:11:44.112183  655385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 20:11:44.197392  655385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:11:44.460042  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:44.479915  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:44.490778  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:44.592825  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.933607891s)
	I0108 20:11:44.962964  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:44.975895  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:44.984193  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:45.369565  655385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.172071773s)
	I0108 20:11:45.372626  655385 addons.go:473] Verifying addon gcp-auth=true in "addons-241374"
	I0108 20:11:45.377926  655385 out.go:177] * Verifying gcp-auth addon...
	I0108 20:11:45.381152  655385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 20:11:45.396973  655385 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 20:11:45.397059  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:45.461370  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:45.476368  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:45.484344  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:45.885430  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:45.961426  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:45.976914  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:45.985240  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:46.385669  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:46.450022  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:46.460409  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:46.476632  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:46.483801  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:46.885844  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:46.959892  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:46.975204  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:46.983473  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:47.385514  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:47.462149  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:47.476324  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:47.484647  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:47.885518  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:47.960432  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:47.975617  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:47.984271  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:48.385653  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:48.451173  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:48.460919  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:48.477703  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:48.484699  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:48.885853  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:48.960406  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:48.974923  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:48.984250  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:49.385020  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:49.460608  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:49.475509  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:49.484108  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:49.884781  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:49.959796  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:49.975825  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:49.984410  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:50.385865  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:50.460108  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:50.476549  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:50.484199  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:50.885623  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:50.953305  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:50.959999  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:50.975742  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:50.984338  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:51.386014  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:51.468468  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:51.482916  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:51.494485  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:51.886424  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:51.960876  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:51.986551  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:51.990697  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:52.385667  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:52.460315  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:52.474719  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:52.484088  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:52.884688  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:52.960097  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:52.975366  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:52.984048  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:53.384960  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:53.450261  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:53.461311  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:53.475069  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:53.484809  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:53.885722  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:53.961072  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:53.975645  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:53.984519  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:54.385923  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:54.459877  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:54.475232  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:54.483872  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:54.884816  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:54.960287  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:54.975612  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:54.983878  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:55.385594  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:55.450457  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:55.464071  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:55.475806  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:55.484499  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:55.885836  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:55.960298  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:55.975459  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:55.983566  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:56.385625  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:56.459465  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:56.475772  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:56.484223  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:56.885077  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:56.959927  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:56.975698  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:56.983967  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:57.385612  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:57.454336  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:57.459986  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:57.475533  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:57.483965  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:57.884933  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:57.960234  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:57.975011  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:57.984442  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:58.385450  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:58.460351  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:58.475631  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:58.484265  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:58.885431  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:58.960310  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:58.975670  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:58.984065  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:59.384735  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:59.460902  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:59.475405  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:59.483927  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:59.884770  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:59.949637  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:59.960414  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:59.974940  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:59.984457  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:00.387197  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:00.462003  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:00.476557  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:00.484810  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:00.885090  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:00.960311  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:00.974846  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:00.984400  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:01.387382  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:01.460465  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:01.476953  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:01.484220  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:01.885941  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:01.949867  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:12:01.960161  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:01.975509  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:01.983749  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:02.384954  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:02.460129  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:02.479311  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:02.485521  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:02.886017  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:02.960873  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:02.975303  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:02.983284  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:03.385549  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:03.460549  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:03.474739  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:03.485910  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:03.885725  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:03.950745  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:12:03.959957  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:03.975908  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:03.984237  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:04.385316  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:04.460466  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:04.474921  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:04.483682  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:04.885451  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:04.960314  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:04.975409  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:04.983805  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:05.385565  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:05.460351  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:05.474871  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:05.484330  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:05.885546  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:05.951837  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:12:05.961535  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:05.976130  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:05.985310  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:06.384982  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:06.460559  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:06.475181  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:06.484621  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:06.885571  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:06.959736  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:06.975246  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:06.983911  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:07.385495  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:07.462595  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:07.476042  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:07.484394  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:07.885490  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:07.961066  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:07.976063  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:07.984484  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:08.385051  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:08.451317  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:12:08.461282  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:08.475845  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:08.484689  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:08.886007  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:08.973486  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:08.982807  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:08.995106  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:09.385197  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:09.461350  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:09.476472  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:09.484073  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:09.886347  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:09.960583  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:09.976026  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:09.984941  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:10.384859  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:10.451866  655385 pod_ready.go:102] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"False"
	I0108 20:12:10.460559  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:10.480405  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:10.485680  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:10.885708  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:10.960850  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:10.976170  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:10.985248  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:11.385209  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:11.461263  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:11.476427  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:11.490822  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:11.886284  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:11.960822  655385 pod_ready.go:92] pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:11.960848  655385 pod_ready.go:81] duration metric: took 36.517520864s waiting for pod "coredns-5dd5756b68-rr2qq" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.960862  655385 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.962328  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:11.966930  655385 pod_ready.go:92] pod "etcd-addons-241374" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:11.966958  655385 pod_ready.go:81] duration metric: took 6.088413ms waiting for pod "etcd-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.966973  655385 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.973795  655385 pod_ready.go:92] pod "kube-apiserver-addons-241374" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:11.973822  655385 pod_ready.go:81] duration metric: took 6.839661ms waiting for pod "kube-apiserver-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.973833  655385 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.978296  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:11.981838  655385 pod_ready.go:92] pod "kube-controller-manager-addons-241374" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:11.981863  655385 pod_ready.go:81] duration metric: took 8.022689ms waiting for pod "kube-controller-manager-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.981875  655385 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dbxdm" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.985527  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:11.988681  655385 pod_ready.go:92] pod "kube-proxy-dbxdm" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:11.988704  655385 pod_ready.go:81] duration metric: took 6.822257ms waiting for pod "kube-proxy-dbxdm" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:11.988716  655385 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:12.348765  655385 pod_ready.go:92] pod "kube-scheduler-addons-241374" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:12.348794  655385 pod_ready.go:81] duration metric: took 360.070466ms waiting for pod "kube-scheduler-addons-241374" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:12.348805  655385 pod_ready.go:38] duration metric: took 37.918631489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:12:12.348820  655385 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:12:12.348887  655385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:12:12.378629  655385 api_server.go:72] duration metric: took 38.331471815s to wait for apiserver process to appear ...
	I0108 20:12:12.378662  655385 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:12:12.378683  655385 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 20:12:12.388359  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:12.392032  655385 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 20:12:12.393360  655385 api_server.go:141] control plane version: v1.28.4
	I0108 20:12:12.393386  655385 api_server.go:131] duration metric: took 14.716768ms to wait for apiserver health ...
	I0108 20:12:12.393395  655385 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:12:12.462467  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:12.476903  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:12.485582  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:12.557975  655385 system_pods.go:59] 18 kube-system pods found
	I0108 20:12:12.558012  655385 system_pods.go:61] "coredns-5dd5756b68-rr2qq" [570c9d87-6804-49f4-a57a-f8eb6a9e5cc9] Running
	I0108 20:12:12.558023  655385 system_pods.go:61] "csi-hostpath-attacher-0" [2a8796f4-6ad2-41d8-a395-a2de4414513a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0108 20:12:12.558032  655385 system_pods.go:61] "csi-hostpath-resizer-0" [30787d38-8e9c-4230-b7af-15f053a9ae5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0108 20:12:12.558041  655385 system_pods.go:61] "csi-hostpathplugin-zc2fb" [bf1475c0-e4d8-4caa-92f1-9c0225945f11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 20:12:12.558052  655385 system_pods.go:61] "etcd-addons-241374" [89400310-b39d-4af7-ad8a-b6d889139fdb] Running
	I0108 20:12:12.558070  655385 system_pods.go:61] "kindnet-rdkbg" [fdb93d84-32e9-4f06-b552-37bc2b30306b] Running
	I0108 20:12:12.558075  655385 system_pods.go:61] "kube-apiserver-addons-241374" [c67d7812-80a4-4422-a95f-1f353e505643] Running
	I0108 20:12:12.558081  655385 system_pods.go:61] "kube-controller-manager-addons-241374" [25dda747-1784-4b8a-856c-f4bd7c4a794b] Running
	I0108 20:12:12.558093  655385 system_pods.go:61] "kube-ingress-dns-minikube" [43c088ed-8201-4eec-98b3-033c2c090aa5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 20:12:12.558099  655385 system_pods.go:61] "kube-proxy-dbxdm" [0f5ea36f-f4e8-48d7-a214-937f3113465d] Running
	I0108 20:12:12.558114  655385 system_pods.go:61] "kube-scheduler-addons-241374" [bb87bc9e-8dc9-47f8-bffd-8e524737537f] Running
	I0108 20:12:12.558122  655385 system_pods.go:61] "metrics-server-7c66d45ddc-9l75h" [f5f8810c-2a29-4938-bdcb-0822875f4a38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 20:12:12.558131  655385 system_pods.go:61] "nvidia-device-plugin-daemonset-w6v8f" [69e2a053-8552-4fbc-a1c1-810e10c9fc21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0108 20:12:12.558143  655385 system_pods.go:61] "registry-bs6q6" [ddd85010-11ab-4e86-bf0c-f8de74575f5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0108 20:12:12.558151  655385 system_pods.go:61] "registry-proxy-kdvlt" [e713e557-9ca9-4695-8167-78ff9123c199] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 20:12:12.558160  655385 system_pods.go:61] "snapshot-controller-58dbcc7b99-5s2mw" [482bcaa3-2588-4d23-bfa6-58f4529890a2] Running
	I0108 20:12:12.558172  655385 system_pods.go:61] "snapshot-controller-58dbcc7b99-nv7qj" [167a97ad-5f08-4ea7-a3c3-6178785ace7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 20:12:12.558179  655385 system_pods.go:61] "storage-provisioner" [1ae8873f-4791-4c98-9e88-0291e0e7f70a] Running
	I0108 20:12:12.558190  655385 system_pods.go:74] duration metric: took 164.788361ms to wait for pod list to return data ...
	I0108 20:12:12.558199  655385 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:12:12.747244  655385 default_sa.go:45] found service account: "default"
	I0108 20:12:12.747275  655385 default_sa.go:55] duration metric: took 189.064568ms for default service account to be created ...
	I0108 20:12:12.747288  655385 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:12:12.885322  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:12.955257  655385 system_pods.go:86] 18 kube-system pods found
	I0108 20:12:12.955297  655385 system_pods.go:89] "coredns-5dd5756b68-rr2qq" [570c9d87-6804-49f4-a57a-f8eb6a9e5cc9] Running
	I0108 20:12:12.955308  655385 system_pods.go:89] "csi-hostpath-attacher-0" [2a8796f4-6ad2-41d8-a395-a2de4414513a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0108 20:12:12.955319  655385 system_pods.go:89] "csi-hostpath-resizer-0" [30787d38-8e9c-4230-b7af-15f053a9ae5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0108 20:12:12.955328  655385 system_pods.go:89] "csi-hostpathplugin-zc2fb" [bf1475c0-e4d8-4caa-92f1-9c0225945f11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 20:12:12.955340  655385 system_pods.go:89] "etcd-addons-241374" [89400310-b39d-4af7-ad8a-b6d889139fdb] Running
	I0108 20:12:12.955347  655385 system_pods.go:89] "kindnet-rdkbg" [fdb93d84-32e9-4f06-b552-37bc2b30306b] Running
	I0108 20:12:12.955354  655385 system_pods.go:89] "kube-apiserver-addons-241374" [c67d7812-80a4-4422-a95f-1f353e505643] Running
	I0108 20:12:12.955360  655385 system_pods.go:89] "kube-controller-manager-addons-241374" [25dda747-1784-4b8a-856c-f4bd7c4a794b] Running
	I0108 20:12:12.955373  655385 system_pods.go:89] "kube-ingress-dns-minikube" [43c088ed-8201-4eec-98b3-033c2c090aa5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 20:12:12.955379  655385 system_pods.go:89] "kube-proxy-dbxdm" [0f5ea36f-f4e8-48d7-a214-937f3113465d] Running
	I0108 20:12:12.955388  655385 system_pods.go:89] "kube-scheduler-addons-241374" [bb87bc9e-8dc9-47f8-bffd-8e524737537f] Running
	I0108 20:12:12.955395  655385 system_pods.go:89] "metrics-server-7c66d45ddc-9l75h" [f5f8810c-2a29-4938-bdcb-0822875f4a38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 20:12:12.955408  655385 system_pods.go:89] "nvidia-device-plugin-daemonset-w6v8f" [69e2a053-8552-4fbc-a1c1-810e10c9fc21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0108 20:12:12.955417  655385 system_pods.go:89] "registry-bs6q6" [ddd85010-11ab-4e86-bf0c-f8de74575f5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0108 20:12:12.955424  655385 system_pods.go:89] "registry-proxy-kdvlt" [e713e557-9ca9-4695-8167-78ff9123c199] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 20:12:12.955433  655385 system_pods.go:89] "snapshot-controller-58dbcc7b99-5s2mw" [482bcaa3-2588-4d23-bfa6-58f4529890a2] Running
	I0108 20:12:12.955442  655385 system_pods.go:89] "snapshot-controller-58dbcc7b99-nv7qj" [167a97ad-5f08-4ea7-a3c3-6178785ace7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 20:12:12.955452  655385 system_pods.go:89] "storage-provisioner" [1ae8873f-4791-4c98-9e88-0291e0e7f70a] Running
	I0108 20:12:12.955460  655385 system_pods.go:126] duration metric: took 208.166167ms to wait for k8s-apps to be running ...
	I0108 20:12:12.955471  655385 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:12:12.955528  655385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:12:12.961572  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:12.977484  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:12.986511  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:12.989559  655385 system_svc.go:56] duration metric: took 34.079213ms WaitForService to wait for kubelet.
	I0108 20:12:12.989587  655385 kubeadm.go:581] duration metric: took 38.942435042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:12:12.989616  655385 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:12:13.147593  655385 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:12:13.147627  655385 node_conditions.go:123] node cpu capacity is 2
	I0108 20:12:13.147642  655385 node_conditions.go:105] duration metric: took 158.006671ms to run NodePressure ...
	I0108 20:12:13.147655  655385 start.go:228] waiting for startup goroutines ...
	I0108 20:12:13.386378  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:13.461683  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:13.475150  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:13.487582  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:13.885871  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:13.961269  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:13.975370  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:13.984069  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:14.385760  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:14.460004  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:14.475644  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:14.484531  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:14.886298  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:14.960112  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:14.976562  655385 kapi.go:107] duration metric: took 32.506263548s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 20:12:14.984051  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:15.386027  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:15.460475  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:15.489976  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:15.884925  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:15.964047  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:15.984416  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:16.385329  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:16.460959  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:16.484336  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:16.884879  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:16.960493  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:16.988055  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:17.385189  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:17.460954  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:17.484485  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:17.885692  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:17.960207  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:17.984263  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:18.385339  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:18.460426  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:18.483900  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:18.885841  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:18.961126  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:18.986213  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:19.384822  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:19.461101  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:19.484186  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:19.885230  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:19.960308  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:19.983828  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:20.384846  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:20.459907  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:20.484232  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:20.884831  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:20.960053  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:20.984743  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:21.385817  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:21.460598  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:21.483897  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:21.884584  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:21.960814  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:21.984135  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:22.395943  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:22.460413  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:22.484774  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:22.885783  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:22.960899  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:22.984901  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:23.387307  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:23.482781  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:23.490163  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:23.888361  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:23.961729  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:23.987760  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:24.390417  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:24.464328  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:24.485063  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:24.893546  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:24.960537  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:24.984586  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:25.389875  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:25.460925  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:25.484623  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:25.895931  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:25.961489  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:25.983992  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:26.385046  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:26.460041  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:26.484254  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:26.885261  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:26.960581  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:26.983865  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:27.385584  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:27.461046  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:27.485116  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:27.885087  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:27.960808  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:27.984728  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:28.385843  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:28.460054  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:28.484876  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:28.885661  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:28.960368  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:28.984532  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:29.385060  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:29.461039  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:29.485255  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:29.886294  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:29.960688  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:29.984494  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:30.385148  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:30.461483  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:30.484347  655385 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:30.886239  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:30.960762  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:30.984579  655385 kapi.go:107] duration metric: took 48.505275568s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 20:12:31.385620  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:31.461930  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:31.887432  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:31.961674  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:32.385331  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:32.460765  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:32.885045  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:32.961172  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:33.385787  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:33.461331  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:33.884948  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:33.960351  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:34.384955  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:34.460806  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:34.885641  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:34.960518  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:35.385233  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:35.462844  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:35.885576  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:35.961616  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:36.385989  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:36.461708  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:36.885867  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:36.961629  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:37.385523  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:37.460480  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:37.886486  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:37.960902  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:38.385444  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:38.463332  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:38.885225  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:38.961077  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:39.384838  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:39.460365  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:39.884865  655385 kapi.go:107] duration metric: took 54.503712899s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 20:12:39.888550  655385 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-241374 cluster.
	I0108 20:12:39.890776  655385 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 20:12:39.893157  655385 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
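	(For reference on the message above: the opt-out it describes is a per-pod label set at creation time. A minimal sketch of such a pod spec follows; only the `gcp-auth-skip-secret` label key comes from the log message, while the pod name, image, and label value are illustrative placeholders, not taken from this test run.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example        # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"    # key referenced by the gcp-auth addon message above; value illustrative
	spec:
	  containers:
	  - name: app
	    image: nginx                    # placeholder image

	(Because the credentials are injected at admission time, the label must be present when the pod is created; as the next message notes, existing pods need to be recreated or the addon re-enabled with --refresh.)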
	I0108 20:12:39.961078  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:40.459934  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:40.959983  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:41.459537  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:41.960346  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:42.465683  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:42.960242  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:43.460677  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:43.964971  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:44.469041  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:44.960901  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:45.460316  655385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:45.959587  655385 kapi.go:107] duration metric: took 1m2.005258565s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 20:12:45.961720  655385 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, inspektor-gadget, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0108 20:12:45.963692  655385 addons.go:508] enable addons completed in 1m12.513215365s: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass inspektor-gadget storage-provisioner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0108 20:12:45.963740  655385 start.go:233] waiting for cluster config update ...
	I0108 20:12:45.964287  655385 start.go:242] writing updated cluster config ...
	I0108 20:12:45.964605  655385 ssh_runner.go:195] Run: rm -f paused
	I0108 20:12:46.331471  655385 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:12:46.334019  655385 out.go:177] * Done! kubectl is now configured to use "addons-241374" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d7fde432ed8ed       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app           2                   ea24d5d6d23d0       hello-world-app-5d77478584-8jqtg
	92c5f7566be45       74077e780ec71       32 seconds ago       Running             nginx                     0                   b655c8e5918c6       nginx
	86d14216acc95       21648f71be814       About a minute ago   Running             headlamp                  0                   0abcaf94e8f91       headlamp-7ddfbb94ff-psmrn
	51b26859bd572       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   599a8542765be       gcp-auth-d4c87556c-bxvhn
	c15c1b81aa3ef       af594c6a879f2       2 minutes ago        Exited              patch                     2                   55f147f99bb85       ingress-nginx-admission-patch-sft7s
	bf7a353b3e9e2       7ce2150c8929b       2 minutes ago        Running             local-path-provisioner    0                   1e734b5414302       local-path-provisioner-78b46b4d5c-qswhm
	2d7320c8aff01       20e3f2db01e81       2 minutes ago        Running             yakd                      0                   72dcd0d0dc7a7       yakd-dashboard-9947fc6bf-hxqzk
	49b7d33d0a173       af594c6a879f2       2 minutes ago        Exited              create                    0                   f8235203b64f2       ingress-nginx-admission-create-mdlvt
	938fb9323555e       97e04611ad434       2 minutes ago        Running             coredns                   0                   26b7ba58409da       coredns-5dd5756b68-rr2qq
	8e198d56423a6       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   7ab731ba02460       storage-provisioner
	64c337107113c       3ca3ca488cf13       3 minutes ago        Running             kube-proxy                0                   d21636e3a21a9       kube-proxy-dbxdm
	aef02ed1b288e       04b4eaa3d3db8       3 minutes ago        Running             kindnet-cni               0                   83776695fb8e3       kindnet-rdkbg
	2f282dd2f6836       04b4c447bb9d4       3 minutes ago        Running             kube-apiserver            0                   c4635ae3711dd       kube-apiserver-addons-241374
	a9f1d3dea3b22       9961cbceaf234       3 minutes ago        Running             kube-controller-manager   0                   9a668b7c15bbd       kube-controller-manager-addons-241374
	9342cd0c2c83b       05c284c929889       3 minutes ago        Running             kube-scheduler            0                   6408126769b57       kube-scheduler-addons-241374
	cbbb097eaf0bd       9cdd6470f48c8       3 minutes ago        Running             etcd                      0                   a3be1eb2d9a02       etcd-addons-241374
	
	
	==> containerd <==
	Jan 08 20:14:27 addons-241374 containerd[739]: time="2024-01-08T20:14:27.139631523Z" level=info msg="cleaning up dead shim"
	Jan 08 20:14:27 addons-241374 containerd[739]: time="2024-01-08T20:14:27.150271304Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:14:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11207 runtime=io.containerd.runc.v2\n"
	Jan 08 20:14:27 addons-241374 containerd[739]: time="2024-01-08T20:14:27.347196811Z" level=info msg="RemoveContainer for \"a33206c23a93eabad6baf46d5b1011d8cf39178f9dec355aaa29e6e6d7e3bf60\""
	Jan 08 20:14:27 addons-241374 containerd[739]: time="2024-01-08T20:14:27.355631260Z" level=info msg="RemoveContainer for \"a33206c23a93eabad6baf46d5b1011d8cf39178f9dec355aaa29e6e6d7e3bf60\" returns successfully"
	Jan 08 20:14:27 addons-241374 containerd[739]: time="2024-01-08T20:14:27.370661207Z" level=info msg="RemoveContainer for \"809611f4ef4c2c98ba3391719acbb492e6f432d772f668611daede6c69b61605\""
	Jan 08 20:14:27 addons-241374 containerd[739]: time="2024-01-08T20:14:27.382768430Z" level=info msg="RemoveContainer for \"809611f4ef4c2c98ba3391719acbb492e6f432d772f668611daede6c69b61605\" returns successfully"
	Jan 08 20:14:29 addons-241374 containerd[739]: time="2024-01-08T20:14:29.106181527Z" level=info msg="StopContainer for \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\" with timeout 2 (s)"
	Jan 08 20:14:29 addons-241374 containerd[739]: time="2024-01-08T20:14:29.107048260Z" level=info msg="Stop container \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\" with signal terminated"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.115934946Z" level=info msg="Kill container \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\""
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.202456372Z" level=info msg="shim disconnected" id=0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.202509959Z" level=warning msg="cleaning up after shim disconnected" id=0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea namespace=k8s.io
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.202521340Z" level=info msg="cleaning up dead shim"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.214086200Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:14:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11304 runtime=io.containerd.runc.v2\n"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.217201021Z" level=info msg="StopContainer for \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\" returns successfully"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.217774079Z" level=info msg="StopPodSandbox for \"e67ddbcf14198afcad92eca921622fb73d1066568237bf39762d23d7384be728\""
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.217847900Z" level=info msg="Container to stop \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.252808169Z" level=info msg="shim disconnected" id=e67ddbcf14198afcad92eca921622fb73d1066568237bf39762d23d7384be728
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.253111001Z" level=warning msg="cleaning up after shim disconnected" id=e67ddbcf14198afcad92eca921622fb73d1066568237bf39762d23d7384be728 namespace=k8s.io
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.253206648Z" level=info msg="cleaning up dead shim"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.264410781Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:14:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11337 runtime=io.containerd.runc.v2\n"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.314149184Z" level=info msg="TearDown network for sandbox \"e67ddbcf14198afcad92eca921622fb73d1066568237bf39762d23d7384be728\" successfully"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.314328464Z" level=info msg="StopPodSandbox for \"e67ddbcf14198afcad92eca921622fb73d1066568237bf39762d23d7384be728\" returns successfully"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.365670612Z" level=info msg="RemoveContainer for \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\""
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.371173893Z" level=info msg="RemoveContainer for \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\" returns successfully"
	Jan 08 20:14:31 addons-241374 containerd[739]: time="2024-01-08T20:14:31.371710586Z" level=error msg="ContainerStatus for \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\": not found"
	
	
	==> coredns [938fb9323555ecfa6f059a2ed21688c2a612c7e2f37cec8b0be5bbaefa2b1359] <==
	[INFO] 10.244.0.17:34014 - 2235 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033222s
	[INFO] 10.244.0.17:34014 - 37475 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001862228s
	[INFO] 10.244.0.17:38683 - 49423 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001747916s
	[INFO] 10.244.0.17:38683 - 35324 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004693885s
	[INFO] 10.244.0.17:34014 - 23036 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001092077s
	[INFO] 10.244.0.17:34014 - 60449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000127071s
	[INFO] 10.244.0.17:38683 - 16836 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032673s
	[INFO] 10.244.0.17:37831 - 12799 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085357s
	[INFO] 10.244.0.17:37831 - 19994 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067585s
	[INFO] 10.244.0.17:37831 - 38317 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044119s
	[INFO] 10.244.0.17:37831 - 12080 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075856s
	[INFO] 10.244.0.17:37831 - 35994 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067273s
	[INFO] 10.244.0.17:37831 - 9694 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073206s
	[INFO] 10.244.0.17:37831 - 3639 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003898938s
	[INFO] 10.244.0.17:37831 - 27965 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002240876s
	[INFO] 10.244.0.17:37831 - 27573 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104746s
	[INFO] 10.244.0.17:48432 - 54985 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075651s
	[INFO] 10.244.0.17:48432 - 13986 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063229s
	[INFO] 10.244.0.17:48432 - 34315 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060324s
	[INFO] 10.244.0.17:48432 - 58860 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062457s
	[INFO] 10.244.0.17:48432 - 55859 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065165s
	[INFO] 10.244.0.17:48432 - 56447 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060454s
	[INFO] 10.244.0.17:48432 - 15240 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001378237s
	[INFO] 10.244.0.17:48432 - 51541 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001026806s
	[INFO] 10.244.0.17:48432 - 21331 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067536s
	
	
	==> describe nodes <==
	Name:               addons-241374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-241374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=addons-241374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_11_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-241374
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-241374
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:14:25 +0000   Mon, 08 Jan 2024 20:11:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:14:25 +0000   Mon, 08 Jan 2024 20:11:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:14:25 +0000   Mon, 08 Jan 2024 20:11:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:14:25 +0000   Mon, 08 Jan 2024 20:11:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-241374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a10aef65360443d847e59eae5179ecb
	  System UUID:                562f89d6-a627-4747-97ae-ba7298810b33
	  Boot ID:                    cf8959e1-1119-4140-86a9-5e54dd11ba57
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-8jqtg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-d4c87556c-bxvhn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  headlamp                    headlamp-7ddfbb94ff-psmrn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-5dd5756b68-rr2qq                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m3s
	  kube-system                 etcd-addons-241374                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m15s
	  kube-system                 kindnet-rdkbg                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m3s
	  kube-system                 kube-apiserver-addons-241374               250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-controller-manager-addons-241374      200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 kube-proxy-dbxdm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 kube-scheduler-addons-241374               100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  local-path-storage          local-path-provisioner-78b46b4d5c-qswhm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-hxqzk             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m1s   kube-proxy       
	  Normal  Starting                 3m16s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m15s  kubelet          Node addons-241374 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s  kubelet          Node addons-241374 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s  kubelet          Node addons-241374 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m15s  kubelet          Node addons-241374 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m15s  kubelet          Node addons-241374 status is now: NodeReady
	  Normal  RegisteredNode           3m4s   node-controller  Node addons-241374 event: Registered Node addons-241374 in Controller
	
	
	==> dmesg <==
	[  +0.000867] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001057] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001228] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +0.003300] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=000000007aebacca
	[  +0.001230] FS-Cache: O-key=[8] 'e63a5c0100000000'
	[  +0.000802] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000001ba12843
	[  +0.001177] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +2.625464] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001086] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=0000000019c2b840
	[  +0.001196] FS-Cache: O-key=[8] 'e53a5c0100000000'
	[  +0.000801] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001152] FS-Cache: N-key=[8] 'e53a5c0100000000'
	[  +0.329983] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001107] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000aa56b1d3
	[  +0.001269] FS-Cache: O-key=[8] 'ee3a5c0100000000'
	[  +0.000826] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000003b0b7e1f
	[  +0.001169] FS-Cache: N-key=[8] 'ee3a5c0100000000'
	[Jan 8 19:41] hrtimer: interrupt took 4780855 ns
	
	
	==> etcd [cbbb097eaf0bd8b808a8c26dd91aa10c8b943322e0211005078eca443195c090] <==
	{"level":"info","ts":"2024-01-08T20:11:13.30394Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-08T20:11:13.304394Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T20:11:13.304529Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:11:13.304552Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:11:13.30456Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:11:13.304953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-08T20:11:13.3051Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-08T20:11:13.393083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:11:13.393195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:11:13.393244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-08T20:11:13.393307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:13.393353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:13.393401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:13.393437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:13.395754Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:13.398452Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-241374 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:11:13.398606Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:11:13.398632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:11:13.399763Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-08T20:11:13.405647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:11:13.405816Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:11:13.407616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:11:13.434662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:13.43477Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:13.4348Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [51b26859bd5722f87153dea3fc7e928e5e691737459b3ab96db18565e44355ee] <==
	2024/01/08 20:12:39 GCP Auth Webhook started!
	2024/01/08 20:12:57 Ready to marshal response ...
	2024/01/08 20:12:57 Ready to write response ...
	2024/01/08 20:13:12 Ready to marshal response ...
	2024/01/08 20:13:12 Ready to write response ...
	2024/01/08 20:13:14 Ready to marshal response ...
	2024/01/08 20:13:14 Ready to write response ...
	2024/01/08 20:13:14 Ready to marshal response ...
	2024/01/08 20:13:14 Ready to write response ...
	2024/01/08 20:13:25 Ready to marshal response ...
	2024/01/08 20:13:25 Ready to write response ...
	2024/01/08 20:13:33 Ready to marshal response ...
	2024/01/08 20:13:33 Ready to write response ...
	2024/01/08 20:13:33 Ready to marshal response ...
	2024/01/08 20:13:33 Ready to write response ...
	2024/01/08 20:13:33 Ready to marshal response ...
	2024/01/08 20:13:33 Ready to write response ...
	2024/01/08 20:13:44 Ready to marshal response ...
	2024/01/08 20:13:44 Ready to write response ...
	2024/01/08 20:14:02 Ready to marshal response ...
	2024/01/08 20:14:02 Ready to write response ...
	2024/01/08 20:14:10 Ready to marshal response ...
	2024/01/08 20:14:10 Ready to write response ...
	
	
	==> kernel <==
	 20:14:36 up  2:56,  0 users,  load average: 0.95, 1.17, 1.49
	Linux addons-241374 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [aef02ed1b288ee92f969e8e38d9dce5d281044d728b3a24bd3badc2d5249d7b4] <==
	I0108 20:12:34.902677       1 main.go:227] handling current node
	I0108 20:12:44.912719       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:12:44.912744       1 main.go:227] handling current node
	I0108 20:12:54.924379       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:12:54.924871       1 main.go:227] handling current node
	I0108 20:13:04.929922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:04.929956       1 main.go:227] handling current node
	I0108 20:13:14.941291       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:14.941321       1 main.go:227] handling current node
	I0108 20:13:24.946070       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:24.946106       1 main.go:227] handling current node
	I0108 20:13:34.958157       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:34.958186       1 main.go:227] handling current node
	I0108 20:13:44.976671       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:44.976700       1 main.go:227] handling current node
	I0108 20:13:54.988512       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:54.988543       1 main.go:227] handling current node
	I0108 20:14:04.993703       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:04.993727       1 main.go:227] handling current node
	I0108 20:14:14.998204       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:14.998234       1 main.go:227] handling current node
	I0108 20:14:25.010788       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:25.010830       1 main.go:227] handling current node
	I0108 20:14:35.027333       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:35.027549       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2f282dd2f683697fc63b004837948c1fe1b4ac7ddbfb0b414b7a6ef93c2b526d] <==
	I0108 20:13:55.970299       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0108 20:13:56.983179       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0108 20:14:01.124226       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.124284       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.166862       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.168262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.193438       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.193632       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.215198       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.215483       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.262518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.262748       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.272745       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.272790       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.281950       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.282014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.299315       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:01.299357       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:01.961343       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0108 20:14:02.227579       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.129.118"}
	W0108 20:14:02.273236       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 20:14:02.299965       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 20:14:02.302259       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 20:14:10.999183       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.57.93"}
	I0108 20:14:25.877115       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [a9f1d3dea3b22f683281b6ae68d53a4fb9f3c30f45a50efdda0b55cf2d81907e] <==
	I0108 20:14:10.806313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="80.566µs"
	I0108 20:14:10.820506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.915µs"
	W0108 20:14:11.166644       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:11.166677       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:11.876147       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:11.876186       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:14:13.312243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.951µs"
	I0108 20:14:14.319256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.006µs"
	I0108 20:14:15.322219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.743µs"
	W0108 20:14:17.284869       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:17.284902       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:17.675542       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:17.675574       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:21.925986       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:21.926019       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:25.145906       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:25.145942       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:14:27.376937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.48µs"
	I0108 20:14:28.071262       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 20:14:28.079038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="7.36µs"
	I0108 20:14:28.086348       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0108 20:14:31.823674       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:31.823708       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:34.023863       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:34.023902       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [64c337107113c6524c19d81f54fe3e67af0f19811d76561e04885041dd5aedaf] <==
	I0108 20:11:34.682930       1 server_others.go:69] "Using iptables proxy"
	I0108 20:11:34.709860       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 20:11:34.768788       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:11:34.771007       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:11:34.771049       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:11:34.771057       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:11:34.771087       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:11:34.771329       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:11:34.771344       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:11:34.772148       1 config.go:188] "Starting service config controller"
	I0108 20:11:34.772196       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:11:34.772216       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:11:34.772220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:11:34.772754       1 config.go:315] "Starting node config controller"
	I0108 20:11:34.772761       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:11:34.872313       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:11:34.872379       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:11:34.872794       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9342cd0c2c83b671ffd037a7e7d6587432d3ba756d645f2f92cd069bc2fe56d1] <==
	W0108 20:11:18.580487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:11:18.580591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:11:18.580628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:11:18.584439       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:11:18.584585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:11:18.584677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:18.584782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:11:18.588312       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:11:18.588945       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:11:18.585303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:11:18.588634       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:11:18.589302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:11:18.588685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:11:18.589496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:11:18.588734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 20:11:18.588796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:11:18.588831       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:11:18.588865       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 20:11:18.588913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:18.597414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:18.597433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:18.597443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:18.597452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:11:18.597465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0108 20:11:19.778380       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 20:14:21 addons-241374 kubelet[1347]: I0108 20:14:21.412641    1347 scope.go:117] "RemoveContainer" containerID="4dd661d3a1645b0b6634b29dd6a8b814d961f5561bc85eaa22692157cd5549b3"
	Jan 08 20:14:25 addons-241374 kubelet[1347]: I0108 20:14:25.023680    1347 scope.go:117] "RemoveContainer" containerID="a33206c23a93eabad6baf46d5b1011d8cf39178f9dec355aaa29e6e6d7e3bf60"
	Jan 08 20:14:25 addons-241374 kubelet[1347]: E0108 20:14:25.023997    1347 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(43c088ed-8201-4eec-98b3-033c2c090aa5)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="43c088ed-8201-4eec-98b3-033c2c090aa5"
	Jan 08 20:14:26 addons-241374 kubelet[1347]: I0108 20:14:26.973674    1347 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxg2\" (UniqueName: \"kubernetes.io/projected/43c088ed-8201-4eec-98b3-033c2c090aa5-kube-api-access-plxg2\") pod \"43c088ed-8201-4eec-98b3-033c2c090aa5\" (UID: \"43c088ed-8201-4eec-98b3-033c2c090aa5\") "
	Jan 08 20:14:26 addons-241374 kubelet[1347]: I0108 20:14:26.978717    1347 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c088ed-8201-4eec-98b3-033c2c090aa5-kube-api-access-plxg2" (OuterVolumeSpecName: "kube-api-access-plxg2") pod "43c088ed-8201-4eec-98b3-033c2c090aa5" (UID: "43c088ed-8201-4eec-98b3-033c2c090aa5"). InnerVolumeSpecName "kube-api-access-plxg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:14:27 addons-241374 kubelet[1347]: I0108 20:14:27.023389    1347 scope.go:117] "RemoveContainer" containerID="809611f4ef4c2c98ba3391719acbb492e6f432d772f668611daede6c69b61605"
	Jan 08 20:14:27 addons-241374 kubelet[1347]: I0108 20:14:27.074599    1347 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-plxg2\" (UniqueName: \"kubernetes.io/projected/43c088ed-8201-4eec-98b3-033c2c090aa5-kube-api-access-plxg2\") on node \"addons-241374\" DevicePath \"\""
	Jan 08 20:14:27 addons-241374 kubelet[1347]: I0108 20:14:27.337635    1347 scope.go:117] "RemoveContainer" containerID="a33206c23a93eabad6baf46d5b1011d8cf39178f9dec355aaa29e6e6d7e3bf60"
	Jan 08 20:14:27 addons-241374 kubelet[1347]: I0108 20:14:27.350333    1347 scope.go:117] "RemoveContainer" containerID="d7fde432ed8edf7acbc9f309dd4d2a7672b85c3cb881488444c31c6fbb2292ac"
	Jan 08 20:14:27 addons-241374 kubelet[1347]: E0108 20:14:27.350625    1347 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-8jqtg_default(af1d0863-8329-4b1d-854a-ba64d76c39fd)\"" pod="default/hello-world-app-5d77478584-8jqtg" podUID="af1d0863-8329-4b1d-854a-ba64d76c39fd"
	Jan 08 20:14:27 addons-241374 kubelet[1347]: I0108 20:14:27.359698    1347 scope.go:117] "RemoveContainer" containerID="809611f4ef4c2c98ba3391719acbb492e6f432d772f668611daede6c69b61605"
	Jan 08 20:14:29 addons-241374 kubelet[1347]: I0108 20:14:29.026695    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="34867381-4980-4cf9-b146-bb10668dec22" path="/var/lib/kubelet/pods/34867381-4980-4cf9-b146-bb10668dec22/volumes"
	Jan 08 20:14:29 addons-241374 kubelet[1347]: I0108 20:14:29.030186    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="43c088ed-8201-4eec-98b3-033c2c090aa5" path="/var/lib/kubelet/pods/43c088ed-8201-4eec-98b3-033c2c090aa5/volumes"
	Jan 08 20:14:29 addons-241374 kubelet[1347]: I0108 20:14:29.034557    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f2ab2107-7442-4c9c-b62a-e2ddb2280b16" path="/var/lib/kubelet/pods/f2ab2107-7442-4c9c-b62a-e2ddb2280b16/volumes"
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.362250    1347 scope.go:117] "RemoveContainer" containerID="0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea"
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.371448    1347 scope.go:117] "RemoveContainer" containerID="0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea"
	Jan 08 20:14:31 addons-241374 kubelet[1347]: E0108 20:14:31.371898    1347 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\": not found" containerID="0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea"
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.371948    1347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea"} err="failed to get container status \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"0534cc003fad6f6d777b36c2069d4e26af33bc8edafc43d0a4bd4d562f0ea8ea\": not found"
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.402729    1347 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8-webhook-cert\") pod \"6474e1da-cfe0-4fc5-aec3-521ea1e4dea8\" (UID: \"6474e1da-cfe0-4fc5-aec3-521ea1e4dea8\") "
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.402802    1347 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqrl7\" (UniqueName: \"kubernetes.io/projected/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8-kube-api-access-nqrl7\") pod \"6474e1da-cfe0-4fc5-aec3-521ea1e4dea8\" (UID: \"6474e1da-cfe0-4fc5-aec3-521ea1e4dea8\") "
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.405601    1347 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6474e1da-cfe0-4fc5-aec3-521ea1e4dea8" (UID: "6474e1da-cfe0-4fc5-aec3-521ea1e4dea8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.409318    1347 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8-kube-api-access-nqrl7" (OuterVolumeSpecName: "kube-api-access-nqrl7") pod "6474e1da-cfe0-4fc5-aec3-521ea1e4dea8" (UID: "6474e1da-cfe0-4fc5-aec3-521ea1e4dea8"). InnerVolumeSpecName "kube-api-access-nqrl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.503090    1347 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqrl7\" (UniqueName: \"kubernetes.io/projected/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8-kube-api-access-nqrl7\") on node \"addons-241374\" DevicePath \"\""
	Jan 08 20:14:31 addons-241374 kubelet[1347]: I0108 20:14:31.503122    1347 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8-webhook-cert\") on node \"addons-241374\" DevicePath \"\""
	Jan 08 20:14:33 addons-241374 kubelet[1347]: I0108 20:14:33.026445    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6474e1da-cfe0-4fc5-aec3-521ea1e4dea8" path="/var/lib/kubelet/pods/6474e1da-cfe0-4fc5-aec3-521ea1e4dea8/volumes"
	
	
	==> storage-provisioner [8e198d56423a637c2f0637754d6a740667c4eb2432c75822a5e6454b2cefb3d9] <==
	I0108 20:11:41.160616       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:11:41.194049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:11:41.194141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:11:41.222164       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:11:41.223993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-241374_7686f642-db07-4c9c-a8d7-8aedf6230a02!
	I0108 20:11:41.231327       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5bcb074-8f07-49f9-a4a7-63c731140294", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-241374_7686f642-db07-4c9c-a8d7-8aedf6230a02 became leader
	I0108 20:11:41.324597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-241374_7686f642-db07-4c9c-a8d7-8aedf6230a02!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-241374 -n addons-241374
helpers_test.go:261: (dbg) Run:  kubectl --context addons-241374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (36.30s)

                                                
                                    
TestFunctional/serial/ExtraConfig (18.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-819954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 20:18:06.877517  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-819954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (16.058411365s)

                                                
                                                
-- stdout --
	* [functional-819954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node functional-819954 in cluster functional-819954
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "functional-819954" container ...
	* Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:18:14.282801  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	E0108 20:18:14.283181  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.283521  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.283981  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.284238  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.284480  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.296521  680424 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.436656  680424 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-819954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 16.05862825s for "functional-819954" cluster.
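Every probe in the stderr block above fails with "connection refused" against https://192.168.49.2:8441 while the restart waits for the node. As a quick cross-check from the host, a throwaway Go probe along the lines below (a hypothetical helper, not part of the test suite; the address and the 5-second timeout are assumptions taken from this log) tells whether anything is listening on the apiserver port at all:

	// apiserver_probe.go - hypothetical helper, not part of minikube's test code.
	// Dials the apiserver endpoint reported in the errors above and prints
	// whether the TCP port accepts connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // endpoint taken from the log above (assumption: unchanged)
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("apiserver not reachable: %v\n", err) // e.g. "connect: connection refused"
			return
		}
		defer conn.Close()
		fmt.Println("TCP connection to", addr, "succeeded; apiserver port is open")
	}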
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-819954
helpers_test.go:235: (dbg) docker inspect functional-819954:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f",
	        "Created": "2024-01-08T20:16:51.434229781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 676751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:16:51.739441604Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/hosts",
	        "LogPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f-json.log",
	        "Name": "/functional-819954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-819954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-819954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49-init/diff:/var/lib/docker/overlay2/5440a5a336c464ed564efc18a632104b770481b7cc483f7cadb6269a7b019538/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-819954",
	                "Source": "/var/lib/docker/volumes/functional-819954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-819954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-819954",
	                "name.minikube.sigs.k8s.io": "functional-819954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1aba25e72c9605376ff8b36e23f3db3d6e51d2cc787f128481560252c5247b9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1aba25e72c9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-819954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5206d6c086de",
	                        "functional-819954"
	                    ],
	                    "NetworkID": "fcf0c895894e67ac49df7e64ee5509b677fcbc3ba93183dd55f88ceb52f4a2e1",
	                    "EndpointID": "e8f7119525a658341fb5089e073b105a4cf2d1cbe7becfb0bb15fe47547f2f19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-819954 -n functional-819954
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-819954 -n functional-819954: exit status 2 (352.102457ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
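The docker inspect output above shows container port 8441/tcp published on 127.0.0.1:33425 even though the status check only reports the host as Running. When poking at a run like this by hand, the published port can be read back with the same inspect template shape the log itself uses for 22/tcp; the snippet below is a hypothetical sketch (the profile name functional-819954 and the 8441/tcp key are taken from this report), not one of the test helpers:

	// hostport.go - hypothetical sketch for reading a published port via docker inspect.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template shape as the cli_runner calls in the log, applied to 8441/tcp.
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-819954").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}

For this run it would print 33425; a request such as curl -k https://127.0.0.1:33425/readyz should then show whether the apiserver ever starts answering on the published port.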
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 logs -n 25: (1.632785011s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-917904                                                         | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:17 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | minikube-local-cache-test:functional-819954                              |                   |         |         |                     |                     |
	| cache   | functional-819954 cache delete                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | minikube-local-cache-test:functional-819954                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	| ssh     | functional-819954 ssh sudo                                               | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-819954                                                        | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh                                                    | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-819954 cache reload                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	| ssh     | functional-819954 ssh                                                    | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-819954 kubectl --                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | --context functional-819954                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:17:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:17:58.464235  680424 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:17:58.464403  680424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:17:58.464406  680424 out.go:309] Setting ErrFile to fd 2...
	I0108 20:17:58.464411  680424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:17:58.464683  680424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:17:58.465115  680424 out.go:303] Setting JSON to false
	I0108 20:17:58.466149  680424 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10819,"bootTime":1704734260,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:17:58.466231  680424 start.go:138] virtualization:  
	I0108 20:17:58.469193  680424 out.go:177] * [functional-819954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:17:58.471386  680424 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:17:58.471535  680424 notify.go:220] Checking for updates...
	I0108 20:17:58.475541  680424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:17:58.477975  680424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:17:58.480192  680424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:17:58.482293  680424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:17:58.484206  680424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:17:58.487083  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:17:58.487175  680424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:17:58.513635  680424 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:17:58.513753  680424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:17:58.641973  680424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:17:58.630894254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:17:58.642062  680424 docker.go:295] overlay module found
	I0108 20:17:58.646069  680424 out.go:177] * Using the docker driver based on existing profile
	I0108 20:17:58.647956  680424 start.go:298] selected driver: docker
	I0108 20:17:58.647974  680424 start.go:902] validating driver "docker" against &{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:17:58.648087  680424 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:17:58.648190  680424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:17:58.764235  680424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:17:58.753935434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:17:58.764631  680424 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:17:58.764674  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:17:58.764699  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:17:58.764709  680424 start_flags.go:323] config:
	{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:17:58.767714  680424 out.go:177] * Starting control plane node functional-819954 in cluster functional-819954
	I0108 20:17:58.769710  680424 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:17:58.771802  680424 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:17:58.773761  680424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:17:58.773815  680424 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0108 20:17:58.773822  680424 cache.go:56] Caching tarball of preloaded images
	I0108 20:17:58.773853  680424 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:17:58.773906  680424 preload.go:174] Found /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0108 20:17:58.773915  680424 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0108 20:17:58.774026  680424 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/config.json ...
	I0108 20:17:58.791691  680424 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:17:58.791705  680424 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:17:58.791724  680424 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:17:58.791773  680424 start.go:365] acquiring machines lock for functional-819954: {Name:mk392846689e434ec56ab3789693926c63d9539d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:17:58.791839  680424 start.go:369] acquired machines lock for "functional-819954" in 43.249µs
	I0108 20:17:58.791858  680424 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:17:58.791863  680424 fix.go:54] fixHost starting: 
	I0108 20:17:58.792216  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:17:58.810342  680424 fix.go:102] recreateIfNeeded on functional-819954: state=Running err=<nil>
	W0108 20:17:58.810361  680424 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:17:58.812153  680424 out.go:177] * Updating the running docker "functional-819954" container ...
	I0108 20:17:58.814044  680424 machine.go:88] provisioning docker machine ...
	I0108 20:17:58.814063  680424 ubuntu.go:169] provisioning hostname "functional-819954"
	I0108 20:17:58.814146  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:58.835933  680424 main.go:141] libmachine: Using SSH client type: native
	I0108 20:17:58.836353  680424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I0108 20:17:58.836367  680424 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-819954 && echo "functional-819954" | sudo tee /etc/hostname
	I0108 20:17:58.992081  680424 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-819954
	
	I0108 20:17:58.992153  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:59.013658  680424 main.go:141] libmachine: Using SSH client type: native
	I0108 20:17:59.014098  680424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I0108 20:17:59.014115  680424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-819954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-819954/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-819954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:17:59.154381  680424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:17:59.154397  680424 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-649468/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-649468/.minikube}
	I0108 20:17:59.154414  680424 ubuntu.go:177] setting up certificates
	I0108 20:17:59.154438  680424 provision.go:83] configureAuth start
	I0108 20:17:59.154511  680424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-819954
	I0108 20:17:59.173236  680424 provision.go:138] copyHostCerts
	I0108 20:17:59.173351  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem, removing ...
	I0108 20:17:59.173360  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem
	I0108 20:17:59.173435  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem (1679 bytes)
	I0108 20:17:59.173535  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem, removing ...
	I0108 20:17:59.173539  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem
	I0108 20:17:59.173564  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem (1078 bytes)
	I0108 20:17:59.173613  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem, removing ...
	I0108 20:17:59.173617  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem
	I0108 20:17:59.173639  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem (1123 bytes)
	I0108 20:17:59.173679  680424 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem org=jenkins.functional-819954 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-819954]
	I0108 20:17:59.825031  680424 provision.go:172] copyRemoteCerts
	I0108 20:17:59.825111  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:17:59.825149  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:59.843393  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:17:59.944001  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:17:59.972873  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 20:18:00.003896  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:18:00.071330  680424 provision.go:86] duration metric: configureAuth took 916.876467ms
	I0108 20:18:00.071352  680424 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:18:00.071582  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:00.071587  680424 machine.go:91] provisioned docker machine in 1.257535756s
	I0108 20:18:00.071594  680424 start.go:300] post-start starting for "functional-819954" (driver="docker")
	I0108 20:18:00.071604  680424 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:18:00.071658  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:18:00.071708  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.111558  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.284092  680424 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:18:00.292369  680424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:18:00.292396  680424 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:18:00.292408  680424 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:18:00.292415  680424 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:18:00.292425  680424 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/addons for local assets ...
	I0108 20:18:00.292501  680424 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/files for local assets ...
	I0108 20:18:00.292588  680424 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem -> 6548052.pem in /etc/ssl/certs
	I0108 20:18:00.292692  680424 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts -> hosts in /etc/test/nested/copy/654805
	I0108 20:18:00.292748  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/654805
	I0108 20:18:00.336333  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:18:00.375975  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts --> /etc/test/nested/copy/654805/hosts (40 bytes)
	I0108 20:18:00.413765  680424 start.go:303] post-start completed in 342.154302ms
	I0108 20:18:00.413846  680424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:18:00.413897  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.435615  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.535985  680424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:18:00.542312  680424 fix.go:56] fixHost completed within 1.750442372s
	I0108 20:18:00.542327  680424 start.go:83] releasing machines lock for "functional-819954", held for 1.750481371s
	I0108 20:18:00.542412  680424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-819954
	I0108 20:18:00.560939  680424 ssh_runner.go:195] Run: cat /version.json
	I0108 20:18:00.561082  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.561208  680424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:18:00.561267  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.580669  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.606110  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.817109  680424 ssh_runner.go:195] Run: systemctl --version
	I0108 20:18:00.822726  680424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:18:00.828448  680424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 20:18:00.851444  680424 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:18:00.851532  680424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:18:00.862902  680424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 20:18:00.862920  680424 start.go:475] detecting cgroup driver to use...
	I0108 20:18:00.862954  680424 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:18:00.863015  680424 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:18:00.879222  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:18:00.893583  680424 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:18:00.893651  680424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:18:00.909698  680424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:18:00.923688  680424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:18:01.061135  680424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:18:01.192875  680424 docker.go:233] disabling docker service ...
	I0108 20:18:01.192936  680424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:18:01.209575  680424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:18:01.224208  680424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:18:01.359344  680424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:18:01.483635  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:18:01.500466  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:18:01.522159  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 20:18:01.534479  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 20:18:01.547530  680424 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 20:18:01.547594  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 20:18:01.561612  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:18:01.574956  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 20:18:01.587922  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:18:01.600693  680424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:18:01.612933  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 20:18:01.625631  680424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:18:01.636493  680424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:18:01.647075  680424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:18:01.770815  680424 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:18:01.989085  680424 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:18:01.989165  680424 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
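The 60-second wait for the containerd socket is a stat-poll against /run/containerd/containerd.sock with a deadline; a small illustrative sketch (the poll interval is chosen here, not taken from minikube):

    // Sketch: poll for a unix socket path until it exists or a deadline passes.
    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        const sock = "/run/containerd/containerd.sock"
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(sock); err == nil {
                log.Printf("found %s", sock)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatalf("timed out waiting for %s", sock)
    }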
	I0108 20:18:01.994608  680424 start.go:543] Will wait 60s for crictl version
	I0108 20:18:01.994663  680424 ssh_runner.go:195] Run: which crictl
	I0108 20:18:01.999881  680424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:18:02.046798  680424 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0108 20:18:02.046868  680424 ssh_runner.go:195] Run: containerd --version
	I0108 20:18:02.077472  680424 ssh_runner.go:195] Run: containerd --version
	I0108 20:18:02.113852  680424 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0108 20:18:02.116279  680424 cli_runner.go:164] Run: docker network inspect functional-819954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:18:02.140354  680424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:18:02.148216  680424 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0108 20:18:02.150174  680424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:18:02.150272  680424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:18:02.191069  680424 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:18:02.191081  680424 containerd.go:518] Images already preloaded, skipping extraction
	I0108 20:18:02.191137  680424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:18:02.234636  680424 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:18:02.234648  680424 cache_images.go:84] Images are preloaded, skipping loading
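The preload check parses the `crictl images --output json` output and matches the repo tags against the expected image list. A sketch of that parse; the JSON field names below are assumed to follow crictl's usual output shape and are not verified against this crictl version:

    // Sketch: decode `crictl images --output json` and print the repo tags.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        data, err := os.ReadFile("crictl-images.json") // assumed dump of the command output
        if err != nil {
            log.Fatal(err)
        }
        var list imageList
        if err := json.Unmarshal(data, &list); err != nil {
            log.Fatal(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }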
	I0108 20:18:02.234708  680424 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:18:02.279978  680424 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0108 20:18:02.280004  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:18:02.280014  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:18:02.280023  680424 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:18:02.280044  680424 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-819954 NodeName:functional-819954 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:18:02.280201  680424 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-819954"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:18:02.280282  680424 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-819954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0108 20:18:02.280354  680424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:18:02.293102  680424 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:18:02.293177  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:18:02.307327  680424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0108 20:18:02.330218  680424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:18:02.352572  680424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
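The file written to /var/tmp/minikube/kubeadm.yaml.new is the multi-document YAML shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks those documents and prints each apiVersion/kind, assuming gopkg.in/yaml.v3 is available:

    // Sketch: iterate over the YAML documents in the generated kubeadm config.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if err == io.EOF {
                    break
                }
                log.Fatal(err)
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }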
	I0108 20:18:02.375049  680424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:18:02.379907  680424 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954 for IP: 192.168.49.2
	I0108 20:18:02.379939  680424 certs.go:190] acquiring lock for shared ca certs: {Name:mk8baa4ad3918f12788abe17f587583afd1a9c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:02.380074  680424 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key
	I0108 20:18:02.380109  680424 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key
	I0108 20:18:02.380182  680424 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.key
	I0108 20:18:02.380233  680424 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.key.dd3b5fb2
	I0108 20:18:02.380269  680424 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.key
	I0108 20:18:02.380373  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem (1338 bytes)
	W0108 20:18:02.380399  680424 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805_empty.pem, impossibly tiny 0 bytes
	I0108 20:18:02.380407  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:18:02.380433  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:18:02.380462  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:18:02.380485  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem (1679 bytes)
	I0108 20:18:02.380528  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:18:02.381312  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:18:02.411349  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 20:18:02.442268  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:18:02.473820  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:18:02.504402  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:18:02.536771  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:18:02.570637  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:18:02.602528  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 20:18:02.632604  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /usr/share/ca-certificates/6548052.pem (1708 bytes)
	I0108 20:18:02.662769  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:18:02.694837  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem --> /usr/share/ca-certificates/654805.pem (1338 bytes)
	I0108 20:18:02.724784  680424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:18:02.755771  680424 ssh_runner.go:195] Run: openssl version
	I0108 20:18:02.763751  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6548052.pem && ln -fs /usr/share/ca-certificates/6548052.pem /etc/ssl/certs/6548052.pem"
	I0108 20:18:02.776125  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.781285  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.781351  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.790188  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6548052.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:18:02.801640  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:18:02.813740  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.818909  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.818963  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.827588  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:18:02.838775  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/654805.pem && ln -fs /usr/share/ca-certificates/654805.pem /etc/ssl/certs/654805.pem"
	I0108 20:18:02.850919  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.855653  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.855707  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.864321  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/654805.pem /etc/ssl/certs/51391683.0"
	I0108 20:18:02.875750  680424 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:18:02.880517  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 20:18:02.889564  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 20:18:02.897996  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 20:18:02.906530  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 20:18:02.915138  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 20:18:02.923827  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
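Each `openssl x509 ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent check in Go, as a sketch (one certificate path from the log is hard-coded; run it against each listed cert to reproduce the sequence of -checkend calls):

    // Sketch: fail if the certificate expires within the next 24 hours,
    // mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }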
	I0108 20:18:02.932403  680424 kubeadm.go:404] StartCluster: {Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:18:02.932495  680424 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 20:18:02.932558  680424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:18:02.974482  680424 cri.go:89] found id: "d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1"
	I0108 20:18:02.974500  680424 cri.go:89] found id: "4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47"
	I0108 20:18:02.974504  680424 cri.go:89] found id: "bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419"
	I0108 20:18:02.974509  680424 cri.go:89] found id: "6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84"
	I0108 20:18:02.974512  680424 cri.go:89] found id: "d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	I0108 20:18:02.974516  680424 cri.go:89] found id: "23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48"
	I0108 20:18:02.974519  680424 cri.go:89] found id: "af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861"
	I0108 20:18:02.974523  680424 cri.go:89] found id: "09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c"
	I0108 20:18:02.974528  680424 cri.go:89] found id: "0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb"
	I0108 20:18:02.974534  680424 cri.go:89] found id: ""
	I0108 20:18:02.974599  680424 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 20:18:03.008491  680424 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c","pid":1274,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c/rootfs","created":"2024-01-08T20:17:05.382496107Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5d770ae7c05d7b13bc2e5621283713ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0a6b7c1f3
96e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb","pid":1232,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb/rootfs","created":"2024-01-08T20:17:05.30318414Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","io.kubernetes.cri.sandbox-name":"etcd-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5ddae75ae78b04ccb699098c29e5635"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48","pid":1314,"status":"running","bundle":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48/rootfs","created":"2024-01-08T20:17:05.451653659Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"94e52f63c4e823859e27d9606ecfb426"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","pid":1685,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3f
e3c2bae7fb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb/rootfs","created":"2024-01-08T20:17:27.40401502Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_1a29cdd1-3689-4c64-b1f6-78051dd0f4cd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a29cdd1-3689-4c64-b1f6-78051dd0f4cd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","pid":1798,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032/rootfs","created":"2024-01-08T20:17:27.902074703Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-8f54v_c8ee3b5c-77b8-49cd-be3b-2fed766a1681","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-8f54v","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c8ee3b5c-77b8-49cd-be3b-2fed766a1681"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47","pid":2131,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47/rootfs","created":"2024-01-08T20:17:40.442612365Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-kfq5h","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"096d598a-b3a8-447b-89e0-f8d6788334d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eafd65391bc08e5b8239b1fb2c4477d0c91
e31c6f20071050365768b2954f84","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84/rootfs","created":"2024-01-08T20:17:27.927929232Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","io.kubernetes.cri.sandbox-name":"kube-proxy-rkdqg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b744ff0b-b217-4f49-8af0-76952412ab2b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","pid":1158,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8f6b4ff0fbedc09da086c68c0
72b77e82fd7e632269f2d98c18c810891c441/rootfs","created":"2024-01-08T20:17:05.170169098Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-819954_f878c4636850ccf2e5b70c6db6ff0087","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f878c4636850ccf2e5b70c6db6ff0087"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861","pid":1325,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861","rootf
s":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861/rootfs","created":"2024-01-08T20:17:05.487200444Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f878c4636850ccf2e5b70c6db6ff0087"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","pid":1192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2410c96230109e2e646b7748ed2c4b653317ef44fb
c5b419932c4b3cc348f68/rootfs","created":"2024-01-08T20:17:05.232105635Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-819954_94e52f63c4e823859e27d9606ecfb426","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"94e52f63c4e823859e27d9606ecfb426"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419","pid":1917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419","roo
tfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419/rootfs","created":"2024-01-08T20:17:28.197457923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","io.kubernetes.cri.sandbox-name":"kindnet-8f54v","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c8ee3b5c-77b8-49cd-be3b-2fed766a1681"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1","pid":2944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d073a74d1e9ca42b70d578b403c76101a2144bfc923bc7898522
17c5a8b9cfd1/rootfs","created":"2024-01-08T20:17:58.630550052Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a29cdd1-3689-4c64-b1f6-78051dd0f4cd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","pid":1144,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d/rootfs","created":"2024-01-08T20:17:05.143485506Z","annotations":{"io.kubernetes.cri.co
ntainer-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-819954_5d770ae7c05d7b13bc2e5621283713ab","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5d770ae7c05d7b13bc2e5621283713ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","pid":2101,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c/roo
tfs","created":"2024-01-08T20:17:40.348140186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-kfq5h_096d598a-b3a8-447b-89e0-f8d6788334d5","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-kfq5h","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"096d598a-b3a8-447b-89e0-f8d6788334d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","pid":1133,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940/rootfs","created":"2024-01-08T20:17:05.11502994Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-819954_a5ddae75ae78b04ccb699098c29e5635","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5ddae75ae78b04ccb699098c29e5635"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","pid":1805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc
004c81ec46e82d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d/rootfs","created":"2024-01-08T20:17:27.837270213Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-rkdqg_b744ff0b-b217-4f49-8af0-76952412ab2b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-rkdqg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b744ff0b-b217-4f49-8af0-76952412ab2b"},"owner":"root"}]
	I0108 20:18:03.008835  680424 cri.go:126] list returned 16 containers
	I0108 20:18:03.008844  680424 cri.go:129] container: {ID:09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c Status:running}
	I0108 20:18:03.008863  680424 cri.go:135] skipping {09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c running}: state = "running", want "paused"
	I0108 20:18:03.008872  680424 cri.go:129] container: {ID:0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb Status:running}
	I0108 20:18:03.008878  680424 cri.go:135] skipping {0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb running}: state = "running", want "paused"
	I0108 20:18:03.008888  680424 cri.go:129] container: {ID:23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 Status:running}
	I0108 20:18:03.008894  680424 cri.go:135] skipping {23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 running}: state = "running", want "paused"
	I0108 20:18:03.008899  680424 cri.go:129] container: {ID:28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb Status:running}
	I0108 20:18:03.008905  680424 cri.go:131] skipping 28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb - not in ps
	I0108 20:18:03.008909  680424 cri.go:129] container: {ID:30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032 Status:running}
	I0108 20:18:03.008915  680424 cri.go:131] skipping 30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032 - not in ps
	I0108 20:18:03.008919  680424 cri.go:129] container: {ID:4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 Status:running}
	I0108 20:18:03.008925  680424 cri.go:135] skipping {4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 running}: state = "running", want "paused"
	I0108 20:18:03.008930  680424 cri.go:129] container: {ID:6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 Status:running}
	I0108 20:18:03.008936  680424 cri.go:135] skipping {6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 running}: state = "running", want "paused"
	I0108 20:18:03.008941  680424 cri.go:129] container: {ID:8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441 Status:running}
	I0108 20:18:03.008963  680424 cri.go:131] skipping 8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441 - not in ps
	I0108 20:18:03.008968  680424 cri.go:129] container: {ID:af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 Status:running}
	I0108 20:18:03.008976  680424 cri.go:135] skipping {af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 running}: state = "running", want "paused"
	I0108 20:18:03.008984  680424 cri.go:129] container: {ID:b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68 Status:running}
	I0108 20:18:03.009020  680424 cri.go:131] skipping b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68 - not in ps
	I0108 20:18:03.009025  680424 cri.go:129] container: {ID:bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 Status:running}
	I0108 20:18:03.009031  680424 cri.go:135] skipping {bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 running}: state = "running", want "paused"
	I0108 20:18:03.009038  680424 cri.go:129] container: {ID:d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 Status:running}
	I0108 20:18:03.009044  680424 cri.go:135] skipping {d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 running}: state = "running", want "paused"
	I0108 20:18:03.009049  680424 cri.go:129] container: {ID:e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d Status:running}
	I0108 20:18:03.009054  680424 cri.go:131] skipping e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d - not in ps
	I0108 20:18:03.009059  680424 cri.go:129] container: {ID:ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c Status:running}
	I0108 20:18:03.009064  680424 cri.go:131] skipping ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c - not in ps
	I0108 20:18:03.009068  680424 cri.go:129] container: {ID:f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940 Status:running}
	I0108 20:18:03.009073  680424 cri.go:131] skipping f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940 - not in ps
	I0108 20:18:03.009078  680424 cri.go:129] container: {ID:f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d Status:running}
	I0108 20:18:03.009084  680424 cri.go:131] skipping f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d - not in ps
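The skipping decisions above filter the `runc --root /run/containerd/runc/k8s.io list -f json` output: only containers whose state matches the requested one ("paused" here) are kept, so every running container is skipped and IDs not returned by crictl ps are ignored. A compact sketch of the status filter (the input file name is illustrative):

    // Sketch: filter runc's JSON container list by status, keeping only "paused".
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        data, err := os.ReadFile("runc-list.json") // assumed dump of the JSON shown above
        if err != nil {
            log.Fatal(err)
        }
        var containers []runcContainer
        if err := json.Unmarshal(data, &containers); err != nil {
            log.Fatal(err)
        }
        const want = "paused"
        for _, c := range containers {
            if c.Status != want {
                fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
                continue
            }
            fmt.Println("keeping", c.ID)
        }
    }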
	I0108 20:18:03.009148  680424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:18:03.027274  680424 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 20:18:03.027286  680424 kubeadm.go:636] restartCluster start
	I0108 20:18:03.027343  680424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 20:18:03.038603  680424 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:03.039158  680424 kubeconfig.go:92] found "functional-819954" server: "https://192.168.49.2:8441"
	I0108 20:18:03.040709  680424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 20:18:03.052615  680424 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-01-08 20:16:57.502159996 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-01-08 20:18:02.365661107 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
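The "needs reconfigure" decision reduces to comparing the kubeadm.yaml already on disk with the freshly generated kubeadm.yaml.new; any difference, here the changed enable-admission-plugins value, sends the flow down the restart path. A trivial sketch of that comparison (minikube itself shells out to diff -u, as logged above):

    // Sketch: decide whether a reconfigure is needed by comparing old and new kubeadm configs.
    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
    )

    func main() {
        oldCfg, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        newCfg, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(oldCfg, newCfg) {
            fmt.Println("configs match: no reconfigure needed")
        } else {
            fmt.Println("configs differ: cluster needs reconfigure")
        }
    }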
	I0108 20:18:03.052624  680424 kubeadm.go:1135] stopping kube-system containers ...
	I0108 20:18:03.052641  680424 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 20:18:03.052710  680424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:18:03.103938  680424 cri.go:89] found id: "d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1"
	I0108 20:18:03.103951  680424 cri.go:89] found id: "4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47"
	I0108 20:18:03.103970  680424 cri.go:89] found id: "bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419"
	I0108 20:18:03.103974  680424 cri.go:89] found id: "6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84"
	I0108 20:18:03.103977  680424 cri.go:89] found id: "d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	I0108 20:18:03.103981  680424 cri.go:89] found id: "23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48"
	I0108 20:18:03.103984  680424 cri.go:89] found id: "af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861"
	I0108 20:18:03.103987  680424 cri.go:89] found id: "09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c"
	I0108 20:18:03.103991  680424 cri.go:89] found id: "0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb"
	I0108 20:18:03.103996  680424 cri.go:89] found id: ""
	I0108 20:18:03.104001  680424 cri.go:234] Stopping containers: [d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb]
	I0108 20:18:03.104060  680424 ssh_runner.go:195] Run: which crictl
	I0108 20:18:03.109069  680424 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb
	I0108 20:18:08.374347  680424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb: (5.265240873s)
	W0108 20:18:08.374414  680424 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb: Process exited with status 1
	stdout:
	d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1
	4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47
	bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419
	6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84
	
	stderr:
	E0108 20:18:08.370998    3382 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": not found" containerID="d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	time="2024-01-08T20:18:08Z" level=fatal msg="stopping the container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": not found"
	I0108 20:18:08.374497  680424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 20:18:08.457602  680424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:18:08.469679  680424 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 20:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 20:17 /etc/kubernetes/scheduler.conf
	
	I0108 20:18:08.469753  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0108 20:18:08.481377  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0108 20:18:08.492216  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0108 20:18:08.503457  680424 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:08.503514  680424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 20:18:08.513986  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0108 20:18:08.524732  680424 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:08.524786  680424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 20:18:08.535465  680424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:18:08.546706  680424 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 20:18:08.546723  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:08.609948  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:10.789213  680424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.179241477s)
	I0108 20:18:10.789231  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:10.997287  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.088107  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.185343  680424 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:18:11.185410  680424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:18:11.203104  680424 api_server.go:72] duration metric: took 17.758159ms to wait for apiserver process to appear ...
	I0108 20:18:11.203118  680424 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:18:11.203149  680424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0108 20:18:11.213592  680424 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0108 20:18:11.229628  680424 api_server.go:141] control plane version: v1.28.4
	I0108 20:18:11.229645  680424 api_server.go:131] duration metric: took 26.52153ms to wait for apiserver health ...
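The healthz wait simply polls https://192.168.49.2:8441/healthz until it returns 200. A sketch of such a poll; the apiserver presents a self-signed certificate, so certificate verification is skipped here purely for illustration:

    // Sketch: poll the apiserver /healthz endpoint until it answers 200 or a deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8441/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("apiserver did not become healthy within 60s")
    }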
	I0108 20:18:11.229653  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:18:11.229659  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:18:11.231542  680424 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:18:11.233706  680424 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:18:11.238795  680424 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:18:11.238807  680424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:18:11.272031  680424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:18:11.665927  680424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:18:11.674252  680424 system_pods.go:59] 8 kube-system pods found
	I0108 20:18:11.674268  680424 system_pods.go:61] "coredns-5dd5756b68-kfq5h" [096d598a-b3a8-447b-89e0-f8d6788334d5] Running
	I0108 20:18:11.674272  680424 system_pods.go:61] "etcd-functional-819954" [7cfcab5f-d05b-43ce-aaf4-936977eda08c] Running
	I0108 20:18:11.674276  680424 system_pods.go:61] "kindnet-8f54v" [c8ee3b5c-77b8-49cd-be3b-2fed766a1681] Running
	I0108 20:18:11.674281  680424 system_pods.go:61] "kube-apiserver-functional-819954" [7c24bde5-5c62-443f-95c3-d23a713d71bd] Running
	I0108 20:18:11.674290  680424 system_pods.go:61] "kube-controller-manager-functional-819954" [c56d710c-a540-427b-9b64-031140796e4f] Running
	I0108 20:18:11.674294  680424 system_pods.go:61] "kube-proxy-rkdqg" [b744ff0b-b217-4f49-8af0-76952412ab2b] Running
	I0108 20:18:11.674299  680424 system_pods.go:61] "kube-scheduler-functional-819954" [c10ec52c-2a67-4ed9-80bc-7c592b59b99c] Running
	I0108 20:18:11.674306  680424 system_pods.go:61] "storage-provisioner" [1a29cdd1-3689-4c64-b1f6-78051dd0f4cd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 20:18:11.674312  680424 system_pods.go:74] duration metric: took 8.374624ms to wait for pod list to return data ...
	I0108 20:18:11.674320  680424 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:18:11.677575  680424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:18:11.677594  680424 node_conditions.go:123] node cpu capacity is 2
	I0108 20:18:11.677603  680424 node_conditions.go:105] duration metric: took 3.279392ms to run NodePressure ...
	I0108 20:18:11.677625  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.896459  680424 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 20:18:11.904865  680424 retry.go:31] will retry after 336.700738ms: kubelet not initialised
	I0108 20:18:12.249628  680424 kubeadm.go:787] kubelet initialised
	I0108 20:18:12.249640  680424 kubeadm.go:788] duration metric: took 353.167628ms waiting for restarted kubelet to initialise ...
	I0108 20:18:12.249656  680424 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
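The "extra waiting" phase repeatedly fetches each system-critical pod and checks its Ready condition. A sketch of that check for a single pod using client-go; the kubeconfig path and retry cadence are assumptions, and the pod name is taken from the log:

    // Sketch: poll one kube-system pod until its PodReady condition is True, or a deadline passes.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed kubeconfig path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        const name = "coredns-5dd5756b68-kfq5h" // pod name taken from the log
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        fmt.Printf("%s is Ready\n", name)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatalf("%s did not become Ready in time", name)
    }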
	I0108 20:18:12.278777  680424 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.282756  680424 pod_ready.go:97] error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	I0108 20:18:14.282790  680424 pod_ready.go:81] duration metric: took 2.003985471s waiting for pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.282801  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	I0108 20:18:14.282822  680424 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283162  680424 pod_ready.go:97] error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283173  680424 pod_ready.go:81] duration metric: took 343.571µs waiting for pod "etcd-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283181  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283202  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283501  680424 pod_ready.go:97] error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283511  680424 pod_ready.go:81] duration metric: took 303.547µs waiting for pod "kube-apiserver-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283521  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283546  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283950  680424 pod_ready.go:97] error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283968  680424 pod_ready.go:81] duration metric: took 407.242µs waiting for pod "kube-controller-manager-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283981  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283996  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rkdqg" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.284223  680424 pod_ready.go:97] error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284231  680424 pod_ready.go:81] duration metric: took 230.332µs waiting for pod "kube-proxy-rkdqg" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.284238  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284251  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.284467  680424 pod_ready.go:97] error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284474  680424 pod_ready.go:81] duration metric: took 217.541µs waiting for pod "kube-scheduler-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.284480  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284496  680424 pod_ready.go:38] duration metric: took 2.03483066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:18:14.284510  680424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0108 20:18:14.293627  680424 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0108 20:18:14.293642  680424 kubeadm.go:640] restartCluster took 11.266351717s
	I0108 20:18:14.293648  680424 kubeadm.go:406] StartCluster complete in 11.361257745s
	I0108 20:18:14.293671  680424 settings.go:142] acquiring lock: {Name:mkb63cd96d7a856f465b0592d8a592dc849b8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:14.293726  680424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:18:14.294383  680424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/kubeconfig: {Name:mk40e5900c8ed31a9e7a0515010236c17752c8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:14.295469  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:18:14.295728  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:14.295766  680424 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:18:14.295825  680424 addons.go:69] Setting storage-provisioner=true in profile "functional-819954"
	I0108 20:18:14.295836  680424 addons.go:237] Setting addon storage-provisioner=true in "functional-819954"
	W0108 20:18:14.295841  680424 addons.go:246] addon storage-provisioner should already be in state true
	I0108 20:18:14.295896  680424 host.go:66] Checking if "functional-819954" exists ...
	I0108 20:18:14.296279  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	W0108 20:18:14.296510  680424 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-819954" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.296521  680424 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.296548  680424 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:18:14.301958  680424 out.go:177] * Verifying Kubernetes components...
	I0108 20:18:14.296882  680424 addons.go:69] Setting default-storageclass=true in profile "functional-819954"
	I0108 20:18:14.303886  680424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-819954"
	I0108 20:18:14.303956  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:18:14.304342  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:14.328200  680424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:18:14.330160  680424 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:18:14.330172  680424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:18:14.330234  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:14.350216  680424 addons.go:237] Setting addon default-storageclass=true in "functional-819954"
	W0108 20:18:14.350227  680424 addons.go:246] addon default-storageclass should already be in state true
	I0108 20:18:14.350248  680424 host.go:66] Checking if "functional-819954" exists ...
	I0108 20:18:14.350700  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:14.387943  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:14.403990  680424 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:18:14.404002  680424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:18:14.404061  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:14.425786  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	E0108 20:18:14.436656  680424 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0108 20:18:14.436680  680424 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0108 20:18:14.436696  680424 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0108 20:18:14.436847  680424 node_ready.go:35] waiting up to 6m0s for node "functional-819954" to be "Ready" ...
	I0108 20:18:14.437355  680424 node_ready.go:53] error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.437365  680424 node_ready.go:38] duration metric: took 508.591µs waiting for node "functional-819954" to be "Ready" ...
	I0108 20:18:14.440079  680424 out.go:177] 
	W0108 20:18:14.441826  680424 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.441847  680424 out.go:239] * 
	W0108 20:18:14.443217  680424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:18:14.445094  680424 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5db7d28fe46b8       04b4eaa3d3db8       3 seconds ago        Running             kindnet-cni               1                   30df59d2570db       kindnet-8f54v
	21ef5a21b398d       3ca3ca488cf13       3 seconds ago        Running             kube-proxy                1                   f9bda501df60f       kube-proxy-rkdqg
	72f1ea2bc29cc       ba04bb24b9575       3 seconds ago        Running             storage-provisioner       2                   28f17cb160b77       storage-provisioner
	158dd083763bc       97e04611ad434       3 seconds ago        Running             coredns                   1                   ef8377f1664c9       coredns-5dd5756b68-kfq5h
	1a740457ab4ba       04b4c447bb9d4       3 seconds ago        Exited              kube-apiserver            1                   908b9cbce2bdf       kube-apiserver-functional-819954
	d073a74d1e9ca       ba04bb24b9575       17 seconds ago       Exited              storage-provisioner       1                   28f17cb160b77       storage-provisioner
	4d6d515dfb3db       97e04611ad434       35 seconds ago       Exited              coredns                   0                   ef8377f1664c9       coredns-5dd5756b68-kfq5h
	bdd9436308352       04b4eaa3d3db8       47 seconds ago       Exited              kindnet-cni               0                   30df59d2570db       kindnet-8f54v
	6eafd65391bc0       3ca3ca488cf13       47 seconds ago       Exited              kube-proxy                0                   f9bda501df60f       kube-proxy-rkdqg
	23cacc81bdb79       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   b2410c9623010       kube-controller-manager-functional-819954
	af97d71bc53af       05c284c929889       About a minute ago   Running             kube-scheduler            0                   8d8f6b4ff0fbe       kube-scheduler-functional-819954
	0a6b7c1f396e6       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   f16a2df5f2aeb       etcd-functional-819954
	
	
	==> containerd <==
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752518809Z" level=info msg="shim disconnected" id=1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752787640Z" level=warning msg="cleaning up after shim disconnected" id=1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550 namespace=k8s.io
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752905998Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.776524519Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3918 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.796934046Z" level=info msg="StartContainer for \"21ef5a21b398de4c92f7c23b2a7438f018e4b9cb6dd4003c3394b3afab228ab7\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.241548449Z" level=info msg="StopContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" with timeout 2 (s)"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.242121671Z" level=info msg="Stop container \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" with signal terminated"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272676303Z" level=info msg="shim disconnected" id=e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272871986Z" level=warning msg="cleaning up after shim disconnected" id=e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d namespace=k8s.io
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272892966Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.294244263Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321472020Z" level=info msg="shim disconnected" id=09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321805301Z" level=warning msg="cleaning up after shim disconnected" id=09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c namespace=k8s.io
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321837958Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.333361005Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4114 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.336100709Z" level=info msg="StopContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.337272327Z" level=info msg="StopPodSandbox for \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.337466344Z" level=info msg="Container to stop \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.341124236Z" level=info msg="TearDown network for sandbox \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\" successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.341240715Z" level=info msg="StopPodSandbox for \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.353288312Z" level=info msg="RemoveContainer for \"c0ab552c13365935acc32ea8138dac9c7273050ed9227c0a302aa3567b3c95af\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.360653188Z" level=info msg="RemoveContainer for \"c0ab552c13365935acc32ea8138dac9c7273050ed9227c0a302aa3567b3c95af\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.362379066Z" level=info msg="RemoveContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.367066546Z" level=info msg="RemoveContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.367952997Z" level=error msg="ContainerStatus for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\": not found"
	
	
	==> coredns [158dd083763bcd5814de3a45796be388b9d8354125ae14dd81467887a246f40b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34838 - 17254 "HINFO IN 8829591644871294786.2221726687050420815. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015153903s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36655 - 51930 "HINFO IN 1864645754049744283.7379797473336989881. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012250114s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000867] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001057] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001228] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +0.003300] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=000000007aebacca
	[  +0.001230] FS-Cache: O-key=[8] 'e63a5c0100000000'
	[  +0.000802] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000001ba12843
	[  +0.001177] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +2.625464] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001086] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=0000000019c2b840
	[  +0.001196] FS-Cache: O-key=[8] 'e53a5c0100000000'
	[  +0.000801] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001152] FS-Cache: N-key=[8] 'e53a5c0100000000'
	[  +0.329983] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001107] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000aa56b1d3
	[  +0.001269] FS-Cache: O-key=[8] 'ee3a5c0100000000'
	[  +0.000826] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000003b0b7e1f
	[  +0.001169] FS-Cache: N-key=[8] 'ee3a5c0100000000'
	[Jan 8 19:41] hrtimer: interrupt took 4780855 ns
	
	
	==> etcd [0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb] <==
	{"level":"info","ts":"2024-01-08T20:17:05.460103Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-08T20:17:05.460631Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470221Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470294Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470305Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-08T20:17:05.470715Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-08T20:17:05.617396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.618496Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.629366Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-819954 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:17:05.633162Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633264Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633299Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:17:05.633541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:17:05.63369Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:17:05.633801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:17:05.634334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:17:05.641975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 20:18:15 up  3:00,  0 users,  load average: 1.02, 1.15, 1.41
	Linux functional-819954 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [5db7d28fe46b8deef1dfbdb0cafda5dacd79814ee5e68f8967e2918879074683] <==
	I0108 20:18:12.806820       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:18:12.807086       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:18:12.807315       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:18:12.897158       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:18:12.897363       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:18:13.210331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:18:13.210371       1 main.go:227] handling current node
	
	
	==> kindnet [bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419] <==
	I0108 20:17:28.306230       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:17:28.306523       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:17:28.306745       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:17:28.306840       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:17:28.306960       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:17:28.797755       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:28.797794       1 main.go:227] handling current node
	I0108 20:17:38.804892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:38.804922       1 main.go:227] handling current node
	I0108 20:17:48.815546       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:48.815578       1 main.go:227] handling current node
	I0108 20:17:58.826405       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:58.826506       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550] <==
	I0108 20:18:12.628602       1 options.go:220] external host was not specified, using 192.168.49.2
	I0108 20:18:12.630349       1 server.go:148] Version: v1.28.4
	I0108 20:18:12.630375       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0108 20:18:12.630625       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48] <==
	I0108 20:17:25.473771       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0108 20:17:25.490750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.064392ms"
	I0108 20:17:25.493959       1 shared_informer.go:318] Caches are synced for daemon sets
	I0108 20:17:25.517482       1 shared_informer.go:318] Caches are synced for attach detach
	I0108 20:17:25.553343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.538152ms"
	I0108 20:17:25.593727       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8f54v"
	I0108 20:17:25.604908       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rkdqg"
	I0108 20:17:25.639245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.807962ms"
	I0108 20:17:25.640124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="275.337µs"
	I0108 20:17:25.805998       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0108 20:17:25.879048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cgl4x"
	I0108 20:17:25.901197       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 20:17:25.901352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.150922ms"
	I0108 20:17:25.913471       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 20:17:25.913502       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0108 20:17:25.922245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.844787ms"
	I0108 20:17:25.922325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.706µs"
	I0108 20:17:26.994510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.738µs"
	I0108 20:17:27.013443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.764µs"
	I0108 20:17:40.517212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.753µs"
	I0108 20:17:41.531172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.361473ms"
	I0108 20:17:41.531382       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0108 20:17:41.532615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.361µs"
	I0108 20:18:12.301322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.88429ms"
	I0108 20:18:12.301498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.235µs"
	
	
	==> kube-proxy [21ef5a21b398de4c92f7c23b2a7438f018e4b9cb6dd4003c3394b3afab228ab7] <==
	I0108 20:18:12.896636       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:18:12.901243       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:18:12.901464       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:18:12.901550       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:18:12.901725       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:18:12.902115       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:18:12.904468       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:18:12.906948       1 config.go:188] "Starting service config controller"
	I0108 20:18:12.911123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:18:12.907113       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:18:12.911167       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:18:12.907811       1 config.go:315] "Starting node config controller"
	I0108 20:18:12.911184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:18:13.011357       1 shared_informer.go:318] Caches are synced for node config
	I0108 20:18:13.011399       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:18:13.011426       1 shared_informer.go:318] Caches are synced for endpoint slice config
	W0108 20:18:13.287280       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:13.287503       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:13.287654       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:14.138741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.138805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.184373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.184435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.720926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.720975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84] <==
	I0108 20:17:27.988534       1 server_others.go:69] "Using iptables proxy"
	I0108 20:17:28.010073       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 20:17:28.059460       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:17:28.071657       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:17:28.071704       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:17:28.071712       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:17:28.071795       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:17:28.072079       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:17:28.072090       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:17:28.073821       1 config.go:188] "Starting service config controller"
	I0108 20:17:28.073870       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:17:28.073917       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:17:28.073922       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:17:28.075775       1 config.go:315] "Starting node config controller"
	I0108 20:17:28.075791       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:17:28.174021       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:17:28.174074       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:17:28.176053       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861] <==
	W0108 20:17:09.284956       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:17:09.284982       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:17:09.285114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:17:09.285141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:17:09.285321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:17:09.285344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:17:09.285465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:17:09.285531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:17:09.285618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:17:09.285674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:17:09.285735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:17:09.285755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 20:17:10.107717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:17:10.108063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 20:17:10.129275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.129536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:17:10.203396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.203638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:17:10.220429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:17:10.220655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:17:10.354507       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:17:10.354736       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:17:10.394755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.394807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0108 20:17:12.772331       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.229240    3571 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5d770ae7c05d7b13bc2e5621283713ab" path="/var/lib/kubelet/pods/5d770ae7c05d7b13bc2e5621283713ab/volumes"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.306223    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.306466    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.306627    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.306823    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.307035    3571 status_manager.go:853] "Failed to get status for pod" podUID="c8ee3b5c-77b8-49cd-be3b-2fed766a1681" pod="kube-system/kindnet-8f54v" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-8f54v\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.307210    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.307379    3571 status_manager.go:853] "Failed to get status for pod" podUID="27b4a77c3ebefa78b9f28bd7e336085d" pod="kube-system/kube-apiserver-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.370510    3571 scope.go:117] "RemoveContainer" containerID="1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: E0108 20:18:15.371112    3571 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-819954_kube-system(27b4a77c3ebefa78b9f28bd7e336085d)\"" pod="kube-system/kube-apiserver-functional-819954" podUID="27b4a77c3ebefa78b9f28bd7e336085d"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371242    3571 status_manager.go:853] "Failed to get status for pod" podUID="27b4a77c3ebefa78b9f28bd7e336085d" pod="kube-system/kube-apiserver-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371472    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371680    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371853    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.372049    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.372228    3571 status_manager.go:853] "Failed to get status for pod" podUID="c8ee3b5c-77b8-49cd-be3b-2fed766a1681" pod="kube-system/kindnet-8f54v" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-8f54v\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.372416    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.226445    3571 status_manager.go:853] "Failed to get status for pod" podUID="27b4a77c3ebefa78b9f28bd7e336085d" pod="kube-system/kube-apiserver-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.226800    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.226993    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227153    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227308    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227469    3571 status_manager.go:853] "Failed to get status for pod" podUID="c8ee3b5c-77b8-49cd-be3b-2fed766a1681" pod="kube-system/kindnet-8f54v" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-8f54v\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227638    3571 status_manager.go:853] "Failed to get status for pod" podUID="f878c4636850ccf2e5b70c6db6ff0087" pod="kube-system/kube-scheduler-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227817    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> storage-provisioner [72f1ea2bc29cc4386d42b1b63cf02ae0fe73685627fd8d9cd52eea91edf7d50c] <==
	I0108 20:18:12.751809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:18:12.769640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:18:12.769735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0108 20:18:16.226427       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1] <==
	I0108 20:17:58.677934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:17:58.713738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:17:58.714002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:17:58.723448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:17:58.725753       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9!
	I0108 20:17:58.727292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca438388-635f-4015-b991-0cb05b966748", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9 became leader
	I0108 20:17:58.826378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:18:15.835921  681856 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-819954 -n functional-819954
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-819954 -n functional-819954: exit status 2 (340.096716ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-819954" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (18.44s)

                                                
                                    
TestFunctional/serial/ComponentHealth (2.46s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-819954 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-819954 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (69.473369ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-819954 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-819954
helpers_test.go:235: (dbg) docker inspect functional-819954:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f",
	        "Created": "2024-01-08T20:16:51.434229781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 676751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:16:51.739441604Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/hosts",
	        "LogPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f-json.log",
	        "Name": "/functional-819954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-819954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-819954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49-init/diff:/var/lib/docker/overlay2/5440a5a336c464ed564efc18a632104b770481b7cc483f7cadb6269a7b019538/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-819954",
	                "Source": "/var/lib/docker/volumes/functional-819954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-819954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-819954",
	                "name.minikube.sigs.k8s.io": "functional-819954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1aba25e72c9605376ff8b36e23f3db3d6e51d2cc787f128481560252c5247b9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1aba25e72c9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-819954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5206d6c086de",
	                        "functional-819954"
	                    ],
	                    "NetworkID": "fcf0c895894e67ac49df7e64ee5509b677fcbc3ba93183dd55f88ceb52f4a2e1",
	                    "EndpointID": "e8f7119525a658341fb5089e073b105a4cf2d1cbe7becfb0bb15fe47547f2f19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-819954 -n functional-819954
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-819954 -n functional-819954: exit status 2 (347.452132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 logs -n 25: (1.626010869s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-917904 --log_dir                                                  | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | /tmp/nospam-917904 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-917904                                                         | nospam-917904     | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:17 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-819954 cache add                                              | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | minikube-local-cache-test:functional-819954                              |                   |         |         |                     |                     |
	| cache   | functional-819954 cache delete                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | minikube-local-cache-test:functional-819954                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	| ssh     | functional-819954 ssh sudo                                               | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-819954                                                        | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh                                                    | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-819954 cache reload                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	| ssh     | functional-819954 ssh                                                    | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-819954 kubectl --                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | --context functional-819954                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:17:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:17:58.464235  680424 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:17:58.464403  680424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:17:58.464406  680424 out.go:309] Setting ErrFile to fd 2...
	I0108 20:17:58.464411  680424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:17:58.464683  680424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:17:58.465115  680424 out.go:303] Setting JSON to false
	I0108 20:17:58.466149  680424 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10819,"bootTime":1704734260,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:17:58.466231  680424 start.go:138] virtualization:  
	I0108 20:17:58.469193  680424 out.go:177] * [functional-819954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:17:58.471386  680424 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:17:58.471535  680424 notify.go:220] Checking for updates...
	I0108 20:17:58.475541  680424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:17:58.477975  680424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:17:58.480192  680424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:17:58.482293  680424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:17:58.484206  680424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:17:58.487083  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:17:58.487175  680424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:17:58.513635  680424 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:17:58.513753  680424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:17:58.641973  680424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:17:58.630894254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:17:58.642062  680424 docker.go:295] overlay module found
	I0108 20:17:58.646069  680424 out.go:177] * Using the docker driver based on existing profile
	I0108 20:17:58.647956  680424 start.go:298] selected driver: docker
	I0108 20:17:58.647974  680424 start.go:902] validating driver "docker" against &{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:17:58.648087  680424 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:17:58.648190  680424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:17:58.764235  680424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:17:58.753935434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:17:58.764631  680424 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:17:58.764674  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:17:58.764699  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:17:58.764709  680424 start_flags.go:323] config:
	{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:17:58.767714  680424 out.go:177] * Starting control plane node functional-819954 in cluster functional-819954
	I0108 20:17:58.769710  680424 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:17:58.771802  680424 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:17:58.773761  680424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:17:58.773815  680424 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0108 20:17:58.773822  680424 cache.go:56] Caching tarball of preloaded images
	I0108 20:17:58.773853  680424 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:17:58.773906  680424 preload.go:174] Found /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0108 20:17:58.773915  680424 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0108 20:17:58.774026  680424 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/config.json ...
	I0108 20:17:58.791691  680424 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:17:58.791705  680424 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:17:58.791724  680424 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:17:58.791773  680424 start.go:365] acquiring machines lock for functional-819954: {Name:mk392846689e434ec56ab3789693926c63d9539d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:17:58.791839  680424 start.go:369] acquired machines lock for "functional-819954" in 43.249µs
	I0108 20:17:58.791858  680424 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:17:58.791863  680424 fix.go:54] fixHost starting: 
	I0108 20:17:58.792216  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:17:58.810342  680424 fix.go:102] recreateIfNeeded on functional-819954: state=Running err=<nil>
	W0108 20:17:58.810361  680424 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:17:58.812153  680424 out.go:177] * Updating the running docker "functional-819954" container ...
	I0108 20:17:58.814044  680424 machine.go:88] provisioning docker machine ...
	I0108 20:17:58.814063  680424 ubuntu.go:169] provisioning hostname "functional-819954"
	I0108 20:17:58.814146  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:58.835933  680424 main.go:141] libmachine: Using SSH client type: native
	I0108 20:17:58.836353  680424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I0108 20:17:58.836367  680424 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-819954 && echo "functional-819954" | sudo tee /etc/hostname
	I0108 20:17:58.992081  680424 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-819954
	
	I0108 20:17:58.992153  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:59.013658  680424 main.go:141] libmachine: Using SSH client type: native
	I0108 20:17:59.014098  680424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I0108 20:17:59.014115  680424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-819954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-819954/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-819954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:17:59.154381  680424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:17:59.154397  680424 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-649468/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-649468/.minikube}
	I0108 20:17:59.154414  680424 ubuntu.go:177] setting up certificates
	I0108 20:17:59.154438  680424 provision.go:83] configureAuth start
	I0108 20:17:59.154511  680424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-819954
	I0108 20:17:59.173236  680424 provision.go:138] copyHostCerts
	I0108 20:17:59.173351  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem, removing ...
	I0108 20:17:59.173360  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem
	I0108 20:17:59.173435  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem (1679 bytes)
	I0108 20:17:59.173535  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem, removing ...
	I0108 20:17:59.173539  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem
	I0108 20:17:59.173564  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem (1078 bytes)
	I0108 20:17:59.173613  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem, removing ...
	I0108 20:17:59.173617  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem
	I0108 20:17:59.173639  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem (1123 bytes)
	I0108 20:17:59.173679  680424 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem org=jenkins.functional-819954 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-819954]
	I0108 20:17:59.825031  680424 provision.go:172] copyRemoteCerts
	I0108 20:17:59.825111  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:17:59.825149  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:59.843393  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:17:59.944001  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:17:59.972873  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 20:18:00.003896  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:18:00.071330  680424 provision.go:86] duration metric: configureAuth took 916.876467ms
	I0108 20:18:00.071352  680424 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:18:00.071582  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:00.071587  680424 machine.go:91] provisioned docker machine in 1.257535756s
	I0108 20:18:00.071594  680424 start.go:300] post-start starting for "functional-819954" (driver="docker")
	I0108 20:18:00.071604  680424 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:18:00.071658  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:18:00.071708  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.111558  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.284092  680424 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:18:00.292369  680424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:18:00.292396  680424 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:18:00.292408  680424 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:18:00.292415  680424 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:18:00.292425  680424 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/addons for local assets ...
	I0108 20:18:00.292501  680424 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/files for local assets ...
	I0108 20:18:00.292588  680424 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem -> 6548052.pem in /etc/ssl/certs
	I0108 20:18:00.292692  680424 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts -> hosts in /etc/test/nested/copy/654805
	I0108 20:18:00.292748  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/654805
	I0108 20:18:00.336333  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:18:00.375975  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts --> /etc/test/nested/copy/654805/hosts (40 bytes)
	I0108 20:18:00.413765  680424 start.go:303] post-start completed in 342.154302ms
	I0108 20:18:00.413846  680424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:18:00.413897  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.435615  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.535985  680424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:18:00.542312  680424 fix.go:56] fixHost completed within 1.750442372s
	I0108 20:18:00.542327  680424 start.go:83] releasing machines lock for "functional-819954", held for 1.750481371s
	I0108 20:18:00.542412  680424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-819954
	I0108 20:18:00.560939  680424 ssh_runner.go:195] Run: cat /version.json
	I0108 20:18:00.561082  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.561208  680424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:18:00.561267  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.580669  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.606110  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.817109  680424 ssh_runner.go:195] Run: systemctl --version
	I0108 20:18:00.822726  680424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:18:00.828448  680424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 20:18:00.851444  680424 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:18:00.851532  680424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:18:00.862902  680424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
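
For orientation, the find/sed pair above first patches any loopback CNI config under /etc/cni/net.d and then parks bridge/podman configs with a .mk_disabled suffix. The following Go fragment is only an illustrative sketch of that second step (it is not minikube's own helper; the directory and suffix are taken from the log lines above):

// disable_bridge_cni.go - illustrative sketch of renaming bridge/podman CNI configs.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		// Leave directories and files already parked with .mk_disabled alone.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			// Same effect as `sudo mv <file> <file>.mk_disabled` in the log above.
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
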
	I0108 20:18:00.862920  680424 start.go:475] detecting cgroup driver to use...
	I0108 20:18:00.862954  680424 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:18:00.863015  680424 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:18:00.879222  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:18:00.893583  680424 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:18:00.893651  680424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:18:00.909698  680424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:18:00.923688  680424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:18:01.061135  680424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:18:01.192875  680424 docker.go:233] disabling docker service ...
	I0108 20:18:01.192936  680424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:18:01.209575  680424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:18:01.224208  680424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:18:01.359344  680424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:18:01.483635  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:18:01.500466  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:18:01.522159  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 20:18:01.534479  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 20:18:01.547530  680424 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 20:18:01.547594  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 20:18:01.561612  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:18:01.574956  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 20:18:01.587922  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:18:01.600693  680424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:18:01.612933  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 20:18:01.625631  680424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:18:01.636493  680424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:18:01.647075  680424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:18:01.770815  680424 ssh_runner.go:195] Run: sudo systemctl restart containerd
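
The run of sed commands above edits /etc/containerd/config.toml in place (sandbox image, SystemdCgroup=false, runc v2 runtime, conf_dir) before containerd is restarted. A minimal Go sketch of one of those substitutions, assuming the same file path and the same regular expression as the SystemdCgroup line, might look like:

// set_systemd_cgroup.go - illustrative equivalent of the
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` step above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// (?m) makes ^ and $ match per line, mirroring sed's line-oriented behaviour.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
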
	I0108 20:18:01.989085  680424 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:18:01.989165  680424 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 20:18:01.994608  680424 start.go:543] Will wait 60s for crictl version
	I0108 20:18:01.994663  680424 ssh_runner.go:195] Run: which crictl
	I0108 20:18:01.999881  680424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:18:02.046798  680424 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0108 20:18:02.046868  680424 ssh_runner.go:195] Run: containerd --version
	I0108 20:18:02.077472  680424 ssh_runner.go:195] Run: containerd --version
	I0108 20:18:02.113852  680424 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0108 20:18:02.116279  680424 cli_runner.go:164] Run: docker network inspect functional-819954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:18:02.140354  680424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:18:02.148216  680424 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0108 20:18:02.150174  680424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:18:02.150272  680424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:18:02.191069  680424 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:18:02.191081  680424 containerd.go:518] Images already preloaded, skipping extraction
	I0108 20:18:02.191137  680424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:18:02.234636  680424 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:18:02.234648  680424 cache_images.go:84] Images are preloaded, skipping loading
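
The preload decision above hinges on `sudo crictl images --output json`: if every expected image is already listed, the tarball extraction is skipped. A hedged sketch of that check follows; the JSON field names reflect recent crictl releases and the `wanted` slice is only an example list, not the real preload manifest:

// check_preload.go - sketch of deciding whether preloaded images are present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Example set only; the real preload list is version-specific.
	wanted := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"}
	for _, w := range wanted {
		if !have[w] {
			fmt.Println("missing:", w, "- would extract preload tarball")
			return
		}
	}
	fmt.Println("all images are preloaded, skipping extraction")
}
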
	I0108 20:18:02.234708  680424 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:18:02.279978  680424 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0108 20:18:02.280004  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:18:02.280014  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:18:02.280023  680424 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:18:02.280044  680424 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-819954 NodeName:functional-819954 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:18:02.280201  680424 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-819954"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:18:02.280282  680424 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-819954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0108 20:18:02.280354  680424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:18:02.293102  680424 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:18:02.293177  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:18:02.307327  680424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0108 20:18:02.330218  680424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:18:02.352572  680424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0108 20:18:02.375049  680424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:18:02.379907  680424 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954 for IP: 192.168.49.2
	I0108 20:18:02.379939  680424 certs.go:190] acquiring lock for shared ca certs: {Name:mk8baa4ad3918f12788abe17f587583afd1a9c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:02.380074  680424 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key
	I0108 20:18:02.380109  680424 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key
	I0108 20:18:02.380182  680424 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.key
	I0108 20:18:02.380233  680424 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.key.dd3b5fb2
	I0108 20:18:02.380269  680424 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.key
	I0108 20:18:02.380373  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem (1338 bytes)
	W0108 20:18:02.380399  680424 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805_empty.pem, impossibly tiny 0 bytes
	I0108 20:18:02.380407  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:18:02.380433  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:18:02.380462  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:18:02.380485  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem (1679 bytes)
	I0108 20:18:02.380528  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:18:02.381312  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:18:02.411349  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 20:18:02.442268  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:18:02.473820  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:18:02.504402  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:18:02.536771  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:18:02.570637  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:18:02.602528  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 20:18:02.632604  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /usr/share/ca-certificates/6548052.pem (1708 bytes)
	I0108 20:18:02.662769  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:18:02.694837  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem --> /usr/share/ca-certificates/654805.pem (1338 bytes)
	I0108 20:18:02.724784  680424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:18:02.755771  680424 ssh_runner.go:195] Run: openssl version
	I0108 20:18:02.763751  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6548052.pem && ln -fs /usr/share/ca-certificates/6548052.pem /etc/ssl/certs/6548052.pem"
	I0108 20:18:02.776125  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.781285  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.781351  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.790188  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6548052.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:18:02.801640  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:18:02.813740  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.818909  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.818963  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.827588  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:18:02.838775  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/654805.pem && ln -fs /usr/share/ca-certificates/654805.pem /etc/ssl/certs/654805.pem"
	I0108 20:18:02.850919  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.855653  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.855707  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.864321  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/654805.pem /etc/ssl/certs/51391683.0"
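
The openssl/ln sequence above installs each CA certificate under /usr/share/ca-certificates and links it into /etc/ssl/certs under its subject-hash name (for example b5213941.0). A non-authoritative Go sketch of that hash-and-symlink step, shelling out to openssl, could be:

// hash_symlink.go - sketch of the `openssl x509 -hash -noout` + `ln -fs` steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic ln -f: replace an existing link if present
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}
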
	I0108 20:18:02.875750  680424 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:18:02.880517  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 20:18:02.889564  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 20:18:02.897996  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 20:18:02.906530  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 20:18:02.915138  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 20:18:02.923827  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
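
Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 24 hours. The same validity check can be expressed with crypto/x509; the sketch below uses one of the paths listed above and is illustrative only:

// checkend.go - sketch of `openssl x509 -noout -checkend 86400` using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to -checkend 86400: does the certificate outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
}
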
	I0108 20:18:02.932403  680424 kubeadm.go:404] StartCluster: {Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:18:02.932495  680424 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 20:18:02.932558  680424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:18:02.974482  680424 cri.go:89] found id: "d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1"
	I0108 20:18:02.974500  680424 cri.go:89] found id: "4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47"
	I0108 20:18:02.974504  680424 cri.go:89] found id: "bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419"
	I0108 20:18:02.974509  680424 cri.go:89] found id: "6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84"
	I0108 20:18:02.974512  680424 cri.go:89] found id: "d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	I0108 20:18:02.974516  680424 cri.go:89] found id: "23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48"
	I0108 20:18:02.974519  680424 cri.go:89] found id: "af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861"
	I0108 20:18:02.974523  680424 cri.go:89] found id: "09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c"
	I0108 20:18:02.974528  680424 cri.go:89] found id: "0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb"
	I0108 20:18:02.974534  680424 cri.go:89] found id: ""
	I0108 20:18:02.974599  680424 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 20:18:03.008491  680424 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c","pid":1274,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c/rootfs","created":"2024-01-08T20:17:05.382496107Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5d770ae7c05d7b13bc2e5621283713ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0a6b7c1f3
96e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb","pid":1232,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb/rootfs","created":"2024-01-08T20:17:05.30318414Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","io.kubernetes.cri.sandbox-name":"etcd-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5ddae75ae78b04ccb699098c29e5635"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48","pid":1314,"status":"running","bundle":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48/rootfs","created":"2024-01-08T20:17:05.451653659Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"94e52f63c4e823859e27d9606ecfb426"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","pid":1685,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3f
e3c2bae7fb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb/rootfs","created":"2024-01-08T20:17:27.40401502Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_1a29cdd1-3689-4c64-b1f6-78051dd0f4cd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a29cdd1-3689-4c64-b1f6-78051dd0f4cd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","pid":1798,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032/rootfs","created":"2024-01-08T20:17:27.902074703Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-8f54v_c8ee3b5c-77b8-49cd-be3b-2fed766a1681","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-8f54v","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c8ee3b5c-77b8-49cd-be3b-2fed766a1681"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47","pid":2131,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47/rootfs","created":"2024-01-08T20:17:40.442612365Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-kfq5h","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"096d598a-b3a8-447b-89e0-f8d6788334d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eafd65391bc08e5b8239b1fb2c4477d0c91
e31c6f20071050365768b2954f84","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84/rootfs","created":"2024-01-08T20:17:27.927929232Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","io.kubernetes.cri.sandbox-name":"kube-proxy-rkdqg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b744ff0b-b217-4f49-8af0-76952412ab2b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","pid":1158,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8f6b4ff0fbedc09da086c68c0
72b77e82fd7e632269f2d98c18c810891c441/rootfs","created":"2024-01-08T20:17:05.170169098Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-819954_f878c4636850ccf2e5b70c6db6ff0087","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f878c4636850ccf2e5b70c6db6ff0087"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861","pid":1325,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861","rootf
s":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861/rootfs","created":"2024-01-08T20:17:05.487200444Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f878c4636850ccf2e5b70c6db6ff0087"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","pid":1192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2410c96230109e2e646b7748ed2c4b653317ef44fb
c5b419932c4b3cc348f68/rootfs","created":"2024-01-08T20:17:05.232105635Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-819954_94e52f63c4e823859e27d9606ecfb426","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"94e52f63c4e823859e27d9606ecfb426"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419","pid":1917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419","roo
tfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419/rootfs","created":"2024-01-08T20:17:28.197457923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","io.kubernetes.cri.sandbox-name":"kindnet-8f54v","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c8ee3b5c-77b8-49cd-be3b-2fed766a1681"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1","pid":2944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d073a74d1e9ca42b70d578b403c76101a2144bfc923bc7898522
17c5a8b9cfd1/rootfs","created":"2024-01-08T20:17:58.630550052Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a29cdd1-3689-4c64-b1f6-78051dd0f4cd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","pid":1144,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d/rootfs","created":"2024-01-08T20:17:05.143485506Z","annotations":{"io.kubernetes.cri.co
ntainer-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-819954_5d770ae7c05d7b13bc2e5621283713ab","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5d770ae7c05d7b13bc2e5621283713ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","pid":2101,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c/roo
tfs","created":"2024-01-08T20:17:40.348140186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-kfq5h_096d598a-b3a8-447b-89e0-f8d6788334d5","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-kfq5h","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"096d598a-b3a8-447b-89e0-f8d6788334d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","pid":1133,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940/rootfs","created":"2024-01-08T20:17:05.11502994Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-819954_a5ddae75ae78b04ccb699098c29e5635","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5ddae75ae78b04ccb699098c29e5635"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","pid":1805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc
004c81ec46e82d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d/rootfs","created":"2024-01-08T20:17:27.837270213Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-rkdqg_b744ff0b-b217-4f49-8af0-76952412ab2b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-rkdqg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b744ff0b-b217-4f49-8af0-76952412ab2b"},"owner":"root"}]
	I0108 20:18:03.008835  680424 cri.go:126] list returned 16 containers
	I0108 20:18:03.008844  680424 cri.go:129] container: {ID:09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c Status:running}
	I0108 20:18:03.008863  680424 cri.go:135] skipping {09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c running}: state = "running", want "paused"
	I0108 20:18:03.008872  680424 cri.go:129] container: {ID:0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb Status:running}
	I0108 20:18:03.008878  680424 cri.go:135] skipping {0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb running}: state = "running", want "paused"
	I0108 20:18:03.008888  680424 cri.go:129] container: {ID:23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 Status:running}
	I0108 20:18:03.008894  680424 cri.go:135] skipping {23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 running}: state = "running", want "paused"
	I0108 20:18:03.008899  680424 cri.go:129] container: {ID:28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb Status:running}
	I0108 20:18:03.008905  680424 cri.go:131] skipping 28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb - not in ps
	I0108 20:18:03.008909  680424 cri.go:129] container: {ID:30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032 Status:running}
	I0108 20:18:03.008915  680424 cri.go:131] skipping 30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032 - not in ps
	I0108 20:18:03.008919  680424 cri.go:129] container: {ID:4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 Status:running}
	I0108 20:18:03.008925  680424 cri.go:135] skipping {4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 running}: state = "running", want "paused"
	I0108 20:18:03.008930  680424 cri.go:129] container: {ID:6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 Status:running}
	I0108 20:18:03.008936  680424 cri.go:135] skipping {6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 running}: state = "running", want "paused"
	I0108 20:18:03.008941  680424 cri.go:129] container: {ID:8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441 Status:running}
	I0108 20:18:03.008963  680424 cri.go:131] skipping 8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441 - not in ps
	I0108 20:18:03.008968  680424 cri.go:129] container: {ID:af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 Status:running}
	I0108 20:18:03.008976  680424 cri.go:135] skipping {af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 running}: state = "running", want "paused"
	I0108 20:18:03.008984  680424 cri.go:129] container: {ID:b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68 Status:running}
	I0108 20:18:03.009020  680424 cri.go:131] skipping b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68 - not in ps
	I0108 20:18:03.009025  680424 cri.go:129] container: {ID:bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 Status:running}
	I0108 20:18:03.009031  680424 cri.go:135] skipping {bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 running}: state = "running", want "paused"
	I0108 20:18:03.009038  680424 cri.go:129] container: {ID:d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 Status:running}
	I0108 20:18:03.009044  680424 cri.go:135] skipping {d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 running}: state = "running", want "paused"
	I0108 20:18:03.009049  680424 cri.go:129] container: {ID:e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d Status:running}
	I0108 20:18:03.009054  680424 cri.go:131] skipping e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d - not in ps
	I0108 20:18:03.009059  680424 cri.go:129] container: {ID:ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c Status:running}
	I0108 20:18:03.009064  680424 cri.go:131] skipping ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c - not in ps
	I0108 20:18:03.009068  680424 cri.go:129] container: {ID:f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940 Status:running}
	I0108 20:18:03.009073  680424 cri.go:131] skipping f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940 - not in ps
	I0108 20:18:03.009078  680424 cri.go:129] container: {ID:f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d Status:running}
	I0108 20:18:03.009084  680424 cri.go:131] skipping f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d - not in ps
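
The JSON above is the raw output of `sudo runc --root /run/containerd/runc/k8s.io list -f json`, and the cri.go lines that follow keep only containers whose state matches the requested one (here "paused") and that also appeared in the earlier crictl listing. A rough Go sketch of that filter, using only the `id` and `status` fields visible in the log, might be:

// filter_paused.go - sketch of filtering `runc list -f json` output by state,
// mirroring the cri.go "skipping ... want paused" lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	want := "paused"
	for _, c := range containers {
		if c.Status != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		fmt.Println("selected:", c.ID)
	}
}
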
	I0108 20:18:03.009148  680424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:18:03.027274  680424 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 20:18:03.027286  680424 kubeadm.go:636] restartCluster start
	I0108 20:18:03.027343  680424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 20:18:03.038603  680424 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:03.039158  680424 kubeconfig.go:92] found "functional-819954" server: "https://192.168.49.2:8441"
	I0108 20:18:03.040709  680424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 20:18:03.052615  680424 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-01-08 20:16:57.502159996 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-01-08 20:18:02.365661107 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
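
The restart path above decides whether a reconfigure is needed by comparing /var/tmp/minikube/kubeadm.yaml against the freshly generated kubeadm.yaml.new. A trivial Go sketch of that comparison (minikube itself shells out to `sudo diff -u`, which also produced the hunk shown above) could be:

// needs_reconfigure.go - sketch of the kubeadm.yaml vs kubeadm.yaml.new comparison.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	oldCfg, errOld := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	newCfg, errNew := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if errOld != nil || errNew != nil {
		fmt.Println("config missing, full init required")
		return
	}
	if bytes.Equal(oldCfg, newCfg) {
		fmt.Println("configs match, no reconfigure needed")
		return
	}
	fmt.Println("needs reconfigure: configs differ")
}
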
	I0108 20:18:03.052624  680424 kubeadm.go:1135] stopping kube-system containers ...
	I0108 20:18:03.052641  680424 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 20:18:03.052710  680424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:18:03.103938  680424 cri.go:89] found id: "d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1"
	I0108 20:18:03.103951  680424 cri.go:89] found id: "4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47"
	I0108 20:18:03.103970  680424 cri.go:89] found id: "bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419"
	I0108 20:18:03.103974  680424 cri.go:89] found id: "6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84"
	I0108 20:18:03.103977  680424 cri.go:89] found id: "d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	I0108 20:18:03.103981  680424 cri.go:89] found id: "23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48"
	I0108 20:18:03.103984  680424 cri.go:89] found id: "af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861"
	I0108 20:18:03.103987  680424 cri.go:89] found id: "09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c"
	I0108 20:18:03.103991  680424 cri.go:89] found id: "0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb"
	I0108 20:18:03.103996  680424 cri.go:89] found id: ""
	I0108 20:18:03.104001  680424 cri.go:234] Stopping containers: [d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb]
	I0108 20:18:03.104060  680424 ssh_runner.go:195] Run: which crictl
	I0108 20:18:03.109069  680424 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb
	I0108 20:18:08.374347  680424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb: (5.265240873s)
	W0108 20:18:08.374414  680424 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb: Process exited with status 1
	stdout:
	d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1
	4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47
	bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419
	6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84
	
	stderr:
	E0108 20:18:08.370998    3382 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": not found" containerID="d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	time="2024-01-08T20:18:08Z" level=fatal msg="stopping the container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": not found"
	I0108 20:18:08.374497  680424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 20:18:08.457602  680424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:18:08.469679  680424 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 20:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 20:17 /etc/kubernetes/scheduler.conf
	
	I0108 20:18:08.469753  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0108 20:18:08.481377  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0108 20:18:08.492216  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0108 20:18:08.503457  680424 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:08.503514  680424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 20:18:08.513986  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0108 20:18:08.524732  680424 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:08.524786  680424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 20:18:08.535465  680424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:18:08.546706  680424 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 20:18:08.546723  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:08.609948  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:10.789213  680424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.179241477s)
	I0108 20:18:10.789231  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:10.997287  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.088107  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.185343  680424 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:18:11.185410  680424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:18:11.203104  680424 api_server.go:72] duration metric: took 17.758159ms to wait for apiserver process to appear ...
	I0108 20:18:11.203118  680424 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:18:11.203149  680424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0108 20:18:11.213592  680424 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0108 20:18:11.229628  680424 api_server.go:141] control plane version: v1.28.4
	I0108 20:18:11.229645  680424 api_server.go:131] duration metric: took 26.52153ms to wait for apiserver health ...
	I0108 20:18:11.229653  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:18:11.229659  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:18:11.231542  680424 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:18:11.233706  680424 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:18:11.238795  680424 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:18:11.238807  680424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:18:11.272031  680424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:18:11.665927  680424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:18:11.674252  680424 system_pods.go:59] 8 kube-system pods found
	I0108 20:18:11.674268  680424 system_pods.go:61] "coredns-5dd5756b68-kfq5h" [096d598a-b3a8-447b-89e0-f8d6788334d5] Running
	I0108 20:18:11.674272  680424 system_pods.go:61] "etcd-functional-819954" [7cfcab5f-d05b-43ce-aaf4-936977eda08c] Running
	I0108 20:18:11.674276  680424 system_pods.go:61] "kindnet-8f54v" [c8ee3b5c-77b8-49cd-be3b-2fed766a1681] Running
	I0108 20:18:11.674281  680424 system_pods.go:61] "kube-apiserver-functional-819954" [7c24bde5-5c62-443f-95c3-d23a713d71bd] Running
	I0108 20:18:11.674290  680424 system_pods.go:61] "kube-controller-manager-functional-819954" [c56d710c-a540-427b-9b64-031140796e4f] Running
	I0108 20:18:11.674294  680424 system_pods.go:61] "kube-proxy-rkdqg" [b744ff0b-b217-4f49-8af0-76952412ab2b] Running
	I0108 20:18:11.674299  680424 system_pods.go:61] "kube-scheduler-functional-819954" [c10ec52c-2a67-4ed9-80bc-7c592b59b99c] Running
	I0108 20:18:11.674306  680424 system_pods.go:61] "storage-provisioner" [1a29cdd1-3689-4c64-b1f6-78051dd0f4cd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 20:18:11.674312  680424 system_pods.go:74] duration metric: took 8.374624ms to wait for pod list to return data ...
	I0108 20:18:11.674320  680424 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:18:11.677575  680424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:18:11.677594  680424 node_conditions.go:123] node cpu capacity is 2
	I0108 20:18:11.677603  680424 node_conditions.go:105] duration metric: took 3.279392ms to run NodePressure ...
	I0108 20:18:11.677625  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.896459  680424 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 20:18:11.904865  680424 retry.go:31] will retry after 336.700738ms: kubelet not initialised
	I0108 20:18:12.249628  680424 kubeadm.go:787] kubelet initialised
	I0108 20:18:12.249640  680424 kubeadm.go:788] duration metric: took 353.167628ms waiting for restarted kubelet to initialise ...
	I0108 20:18:12.249656  680424 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:18:12.278777  680424 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.282756  680424 pod_ready.go:97] error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	I0108 20:18:14.282790  680424 pod_ready.go:81] duration metric: took 2.003985471s waiting for pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.282801  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	I0108 20:18:14.282822  680424 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283162  680424 pod_ready.go:97] error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283173  680424 pod_ready.go:81] duration metric: took 343.571µs waiting for pod "etcd-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283181  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283202  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283501  680424 pod_ready.go:97] error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283511  680424 pod_ready.go:81] duration metric: took 303.547µs waiting for pod "kube-apiserver-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283521  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283546  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283950  680424 pod_ready.go:97] error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283968  680424 pod_ready.go:81] duration metric: took 407.242µs waiting for pod "kube-controller-manager-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283981  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283996  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rkdqg" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.284223  680424 pod_ready.go:97] error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284231  680424 pod_ready.go:81] duration metric: took 230.332µs waiting for pod "kube-proxy-rkdqg" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.284238  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284251  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.284467  680424 pod_ready.go:97] error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284474  680424 pod_ready.go:81] duration metric: took 217.541µs waiting for pod "kube-scheduler-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.284480  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284496  680424 pod_ready.go:38] duration metric: took 2.03483066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:18:14.284510  680424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0108 20:18:14.293627  680424 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0108 20:18:14.293642  680424 kubeadm.go:640] restartCluster took 11.266351717s
	I0108 20:18:14.293648  680424 kubeadm.go:406] StartCluster complete in 11.361257745s
	I0108 20:18:14.293671  680424 settings.go:142] acquiring lock: {Name:mkb63cd96d7a856f465b0592d8a592dc849b8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:14.293726  680424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:18:14.294383  680424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/kubeconfig: {Name:mk40e5900c8ed31a9e7a0515010236c17752c8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:14.295469  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:18:14.295728  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:14.295766  680424 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:18:14.295825  680424 addons.go:69] Setting storage-provisioner=true in profile "functional-819954"
	I0108 20:18:14.295836  680424 addons.go:237] Setting addon storage-provisioner=true in "functional-819954"
	W0108 20:18:14.295841  680424 addons.go:246] addon storage-provisioner should already be in state true
	I0108 20:18:14.295896  680424 host.go:66] Checking if "functional-819954" exists ...
	I0108 20:18:14.296279  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	W0108 20:18:14.296510  680424 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-819954" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.296521  680424 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.296548  680424 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:18:14.301958  680424 out.go:177] * Verifying Kubernetes components...
	I0108 20:18:14.296882  680424 addons.go:69] Setting default-storageclass=true in profile "functional-819954"
	I0108 20:18:14.303886  680424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-819954"
	I0108 20:18:14.303956  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:18:14.304342  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:14.328200  680424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:18:14.330160  680424 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:18:14.330172  680424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:18:14.330234  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:14.350216  680424 addons.go:237] Setting addon default-storageclass=true in "functional-819954"
	W0108 20:18:14.350227  680424 addons.go:246] addon default-storageclass should already be in state true
	I0108 20:18:14.350248  680424 host.go:66] Checking if "functional-819954" exists ...
	I0108 20:18:14.350700  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:14.387943  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:14.403990  680424 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:18:14.404002  680424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:18:14.404061  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:14.425786  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	E0108 20:18:14.436656  680424 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0108 20:18:14.436680  680424 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0108 20:18:14.436696  680424 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0108 20:18:14.436847  680424 node_ready.go:35] waiting up to 6m0s for node "functional-819954" to be "Ready" ...
	I0108 20:18:14.437355  680424 node_ready.go:53] error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.437365  680424 node_ready.go:38] duration metric: took 508.591µs waiting for node "functional-819954" to be "Ready" ...
	I0108 20:18:14.440079  680424 out.go:177] 
	W0108 20:18:14.441826  680424 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.441847  680424 out.go:239] * 
	W0108 20:18:14.443217  680424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:18:14.445094  680424 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5db7d28fe46b8       04b4eaa3d3db8       5 seconds ago        Running             kindnet-cni               1                   30df59d2570db       kindnet-8f54v
	21ef5a21b398d       3ca3ca488cf13       5 seconds ago        Running             kube-proxy                1                   f9bda501df60f       kube-proxy-rkdqg
	72f1ea2bc29cc       ba04bb24b9575       5 seconds ago        Running             storage-provisioner       2                   28f17cb160b77       storage-provisioner
	158dd083763bc       97e04611ad434       5 seconds ago        Running             coredns                   1                   ef8377f1664c9       coredns-5dd5756b68-kfq5h
	1a740457ab4ba       04b4c447bb9d4       5 seconds ago        Exited              kube-apiserver            1                   908b9cbce2bdf       kube-apiserver-functional-819954
	d073a74d1e9ca       ba04bb24b9575       19 seconds ago       Exited              storage-provisioner       1                   28f17cb160b77       storage-provisioner
	4d6d515dfb3db       97e04611ad434       37 seconds ago       Exited              coredns                   0                   ef8377f1664c9       coredns-5dd5756b68-kfq5h
	bdd9436308352       04b4eaa3d3db8       49 seconds ago       Exited              kindnet-cni               0                   30df59d2570db       kindnet-8f54v
	6eafd65391bc0       3ca3ca488cf13       50 seconds ago       Exited              kube-proxy                0                   f9bda501df60f       kube-proxy-rkdqg
	23cacc81bdb79       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   b2410c9623010       kube-controller-manager-functional-819954
	af97d71bc53af       05c284c929889       About a minute ago   Running             kube-scheduler            0                   8d8f6b4ff0fbe       kube-scheduler-functional-819954
	0a6b7c1f396e6       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   f16a2df5f2aeb       etcd-functional-819954
	
	
	==> containerd <==
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752518809Z" level=info msg="shim disconnected" id=1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752787640Z" level=warning msg="cleaning up after shim disconnected" id=1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550 namespace=k8s.io
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752905998Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.776524519Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3918 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.796934046Z" level=info msg="StartContainer for \"21ef5a21b398de4c92f7c23b2a7438f018e4b9cb6dd4003c3394b3afab228ab7\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.241548449Z" level=info msg="StopContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" with timeout 2 (s)"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.242121671Z" level=info msg="Stop container \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" with signal terminated"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272676303Z" level=info msg="shim disconnected" id=e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272871986Z" level=warning msg="cleaning up after shim disconnected" id=e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d namespace=k8s.io
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272892966Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.294244263Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321472020Z" level=info msg="shim disconnected" id=09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321805301Z" level=warning msg="cleaning up after shim disconnected" id=09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c namespace=k8s.io
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321837958Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.333361005Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4114 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.336100709Z" level=info msg="StopContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.337272327Z" level=info msg="StopPodSandbox for \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.337466344Z" level=info msg="Container to stop \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.341124236Z" level=info msg="TearDown network for sandbox \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\" successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.341240715Z" level=info msg="StopPodSandbox for \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.353288312Z" level=info msg="RemoveContainer for \"c0ab552c13365935acc32ea8138dac9c7273050ed9227c0a302aa3567b3c95af\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.360653188Z" level=info msg="RemoveContainer for \"c0ab552c13365935acc32ea8138dac9c7273050ed9227c0a302aa3567b3c95af\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.362379066Z" level=info msg="RemoveContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.367066546Z" level=info msg="RemoveContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.367952997Z" level=error msg="ContainerStatus for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\": not found"
	
	
	==> coredns [158dd083763bcd5814de3a45796be388b9d8354125ae14dd81467887a246f40b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34838 - 17254 "HINFO IN 8829591644871294786.2221726687050420815. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015153903s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36655 - 51930 "HINFO IN 1864645754049744283.7379797473336989881. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012250114s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000867] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001057] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001228] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +0.003300] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=000000007aebacca
	[  +0.001230] FS-Cache: O-key=[8] 'e63a5c0100000000'
	[  +0.000802] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000001ba12843
	[  +0.001177] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +2.625464] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001086] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=0000000019c2b840
	[  +0.001196] FS-Cache: O-key=[8] 'e53a5c0100000000'
	[  +0.000801] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001152] FS-Cache: N-key=[8] 'e53a5c0100000000'
	[  +0.329983] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001107] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000aa56b1d3
	[  +0.001269] FS-Cache: O-key=[8] 'ee3a5c0100000000'
	[  +0.000826] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000003b0b7e1f
	[  +0.001169] FS-Cache: N-key=[8] 'ee3a5c0100000000'
	[Jan 8 19:41] hrtimer: interrupt took 4780855 ns
	
	
	==> etcd [0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb] <==
	{"level":"info","ts":"2024-01-08T20:17:05.460103Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-08T20:17:05.460631Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470221Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470294Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470305Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-08T20:17:05.470715Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-08T20:17:05.617396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.618496Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.629366Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-819954 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:17:05.633162Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633264Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633299Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:17:05.633541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:17:05.63369Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:17:05.633801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:17:05.634334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:17:05.641975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 20:18:18 up  3:00,  0 users,  load average: 1.02, 1.15, 1.41
	Linux functional-819954 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [5db7d28fe46b8deef1dfbdb0cafda5dacd79814ee5e68f8967e2918879074683] <==
	I0108 20:18:12.806820       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:18:12.807086       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:18:12.807315       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:18:12.897158       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:18:12.897363       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:18:13.210331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:18:13.210371       1 main.go:227] handling current node
	
	
	==> kindnet [bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419] <==
	I0108 20:17:28.306230       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:17:28.306523       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:17:28.306745       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:17:28.306840       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:17:28.306960       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:17:28.797755       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:28.797794       1 main.go:227] handling current node
	I0108 20:17:38.804892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:38.804922       1 main.go:227] handling current node
	I0108 20:17:48.815546       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:48.815578       1 main.go:227] handling current node
	I0108 20:17:58.826405       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:58.826506       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550] <==
	I0108 20:18:12.628602       1 options.go:220] external host was not specified, using 192.168.49.2
	I0108 20:18:12.630349       1 server.go:148] Version: v1.28.4
	I0108 20:18:12.630375       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0108 20:18:12.630625       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
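	
	Note: the "bind: address already in use" failure above is the likely root cause of every later "connection refused" against 192.168.49.2:8441 in this report: the restarted kube-apiserver exits immediately because something still holds port 8441, so nothing serves the API for the remainder of the run. Below is a minimal, self-contained Go sketch (not minikube code; only the port number is taken from the log) illustrating the same failure mode:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Hold port 8441, standing in for whatever process still owns the
		// apiserver's secure port on the node.
		holder, err := net.Listen("tcp", "0.0.0.0:8441")
		if err != nil {
			fmt.Println("could not bind port 8441 at all:", err)
			return
		}
		defer holder.Close()
	
		// A second listener on the same address fails exactly like the
		// restarted kube-apiserver: "bind: address already in use".
		if _, err := net.Listen("tcp", "0.0.0.0:8441"); err != nil {
			fmt.Println("second bind failed:", err)
		}
	}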
	
	
	==> kube-controller-manager [23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48] <==
	I0108 20:17:25.473771       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0108 20:17:25.490750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.064392ms"
	I0108 20:17:25.493959       1 shared_informer.go:318] Caches are synced for daemon sets
	I0108 20:17:25.517482       1 shared_informer.go:318] Caches are synced for attach detach
	I0108 20:17:25.553343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.538152ms"
	I0108 20:17:25.593727       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8f54v"
	I0108 20:17:25.604908       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rkdqg"
	I0108 20:17:25.639245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.807962ms"
	I0108 20:17:25.640124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="275.337µs"
	I0108 20:17:25.805998       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0108 20:17:25.879048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cgl4x"
	I0108 20:17:25.901197       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 20:17:25.901352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.150922ms"
	I0108 20:17:25.913471       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 20:17:25.913502       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0108 20:17:25.922245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.844787ms"
	I0108 20:17:25.922325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.706µs"
	I0108 20:17:26.994510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.738µs"
	I0108 20:17:27.013443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.764µs"
	I0108 20:17:40.517212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.753µs"
	I0108 20:17:41.531172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.361473ms"
	I0108 20:17:41.531382       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0108 20:17:41.532615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.361µs"
	I0108 20:18:12.301322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.88429ms"
	I0108 20:18:12.301498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.235µs"
	
	
	==> kube-proxy [21ef5a21b398de4c92f7c23b2a7438f018e4b9cb6dd4003c3394b3afab228ab7] <==
	I0108 20:18:12.904468       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:18:12.906948       1 config.go:188] "Starting service config controller"
	I0108 20:18:12.911123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:18:12.907113       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:18:12.911167       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:18:12.907811       1 config.go:315] "Starting node config controller"
	I0108 20:18:12.911184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:18:13.011357       1 shared_informer.go:318] Caches are synced for node config
	I0108 20:18:13.011399       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:18:13.011426       1 shared_informer.go:318] Caches are synced for endpoint slice config
	W0108 20:18:13.287280       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:13.287503       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:13.287654       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:14.138741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.138805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.184373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.184435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.720926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.720975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:16.466907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:16.466950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:16.867603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:16.867651       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:17.516931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:17.516980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
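	
	Note: the "%!s(MISSING)", "%!D(MISSING)", "%!F(MISSING)" fragments in the kube-proxy errors above are part of the captured log, not corruption introduced by this report. The request URL embeds percent-encoded selector characters (e.g. "%3D" for "=", "%2F" for "/"), and the error text appears to have been run back through a printf-style formatter with no arguments, so each escape is parsed as a width and a verb with no matching argument. A tiny Go sketch of that interpretation (the field selector value is copied from the log):
	
	package main
	
	import "fmt"
	
	func main() {
		// "%3D" is parsed as width 3 + verb 'D' with no matching argument,
		// which fmt renders as "%!D(MISSING)".
		fmt.Printf("metadata.name%3Dfunctional-819954\n")
		// Output: metadata.name%!D(MISSING)functional-819954
	}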
	
	
	==> kube-proxy [6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84] <==
	I0108 20:17:27.988534       1 server_others.go:69] "Using iptables proxy"
	I0108 20:17:28.010073       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 20:17:28.059460       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:17:28.071657       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:17:28.071704       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:17:28.071712       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:17:28.071795       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:17:28.072079       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:17:28.072090       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:17:28.073821       1 config.go:188] "Starting service config controller"
	I0108 20:17:28.073870       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:17:28.073917       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:17:28.073922       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:17:28.075775       1 config.go:315] "Starting node config controller"
	I0108 20:17:28.075791       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:17:28.174021       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:17:28.174074       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:17:28.176053       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861] <==
	W0108 20:17:09.284956       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:17:09.284982       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:17:09.285114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:17:09.285141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:17:09.285321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:17:09.285344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:17:09.285465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:17:09.285531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:17:09.285618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:17:09.285674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:17:09.285735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:17:09.285755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 20:17:10.107717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:17:10.108063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 20:17:10.129275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.129536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:17:10.203396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.203638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:17:10.220429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:17:10.220655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:17:10.354507       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:17:10.354736       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:17:10.394755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.394807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0108 20:17:12.772331       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
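
Note: the repeated "forbidden" warnings above are the scheduler's informers retrying while the apiserver's bootstrap RBAC is still being reconciled; they stop once the cache-sync message appears, as it does here. If they were to persist, the scheduler's permissions could be spot-checked with impersonation (illustrative commands, not part of this test run):

	kubectl --context functional-819954 auth can-i list pods --as=system:kube-scheduler
	kubectl --context functional-819954 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler

Both should print "yes" against a healthy control plane.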
	
	
	==> kubelet <==
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371472    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371680    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.371853    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.372049    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.372228    3571 status_manager.go:853] "Failed to get status for pod" podUID="c8ee3b5c-77b8-49cd-be3b-2fed766a1681" pod="kube-system/kindnet-8f54v" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-8f54v\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:15 functional-819954 kubelet[3571]: I0108 20:18:15.372416    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.226445    3571 status_manager.go:853] "Failed to get status for pod" podUID="27b4a77c3ebefa78b9f28bd7e336085d" pod="kube-system/kube-apiserver-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.226800    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.226993    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227153    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227308    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227469    3571 status_manager.go:853] "Failed to get status for pod" podUID="c8ee3b5c-77b8-49cd-be3b-2fed766a1681" pod="kube-system/kindnet-8f54v" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-8f54v\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227638    3571 status_manager.go:853] "Failed to get status for pod" podUID="f878c4636850ccf2e5b70c6db6ff0087" pod="kube-system/kube-scheduler-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.227817    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.372289    3571 scope.go:117] "RemoveContainer" containerID="1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: E0108 20:18:16.372860    3571 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-819954_kube-system(27b4a77c3ebefa78b9f28bd7e336085d)\"" pod="kube-system/kube-apiserver-functional-819954" podUID="27b4a77c3ebefa78b9f28bd7e336085d"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.373055    3571 status_manager.go:853] "Failed to get status for pod" podUID="27b4a77c3ebefa78b9f28bd7e336085d" pod="kube-system/kube-apiserver-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.373300    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.373496    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.373670    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.373821    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.373996    3571 status_manager.go:853] "Failed to get status for pod" podUID="c8ee3b5c-77b8-49cd-be3b-2fed766a1681" pod="kube-system/kindnet-8f54v" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-8f54v\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.374153    3571 status_manager.go:853] "Failed to get status for pod" podUID="f878c4636850ccf2e5b70c6db6ff0087" pod="kube-system/kube-scheduler-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:16 functional-819954 kubelet[3571]: I0108 20:18:16.374313    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:17 functional-819954 kubelet[3571]: E0108 20:18:17.321477    3571 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-819954.17a878a15277a0f4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-819954", UID:"5d770ae7c05d7b13bc2e5621283713ab", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Stopping container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"functional-819954"}, FirstTimestamp:time.Date(2024, time.January, 8, 20, 18, 13, 228372212, time.Local), LastTimestamp:time.Date(2024, time.January, 8, 20, 18, 13, 228372212, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-819954"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	
	
	==> storage-provisioner [72f1ea2bc29cc4386d42b1b63cf02ae0fe73685627fd8d9cd52eea91edf7d50c] <==
	I0108 20:18:12.751809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:18:12.769640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:18:12.769735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0108 20:18:16.226427       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1] <==
	I0108 20:17:58.677934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:17:58.713738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:17:58.714002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:17:58.723448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:17:58.725753       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9!
	I0108 20:17:58.727292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca438388-635f-4015-b991-0cb05b966748", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9 became leader
	I0108 20:17:58.826378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9!
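
Note: the provisioner above runs client-go leader election against the kube-system/k8s.io-minikube-hostpath Endpoints object; this instance acquired the lock while the apiserver was still reachable, whereas the other instance's attempt failed with connection refused. The current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on that Endpoints object and can be read back once the apiserver is up (illustrative, not part of this test run):

	kubectl --context functional-819954 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The holder identity in that annotation should match the controller name logged above.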
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:18:18.274122  682197 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-819954 -n functional-819954
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-819954 -n functional-819954: exit status 2 (362.871865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-819954" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (2.46s)
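
Note: every failure in this block is downstream of kube-apiserver crash-looping on 192.168.49.2:8441 (see the CrashLoopBackOff entry in the kubelet log above), so the component-health check itself is only a symptom. When minikube reports the apiserver as Stopped, two quick checks, shown here purely as illustrations and not run by the test, are to probe a health endpoint from the host and to list the apiserver container inside the node:

	curl -k https://192.168.49.2:8441/livez
	out/minikube-linux-arm64 -p functional-819954 ssh "sudo crictl ps -a --name kube-apiserver"

The first returns "ok" only while the apiserver is actually serving (the endpoint is readable without credentials under default RBAC); the second shows whether the container is Running or Exited and how many start attempts it has made, even while the port refuses connections.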

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 logs --file /tmp/TestFunctionalserialLogsFileCmd4022646000/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 logs --file /tmp/TestFunctionalserialLogsFileCmd4022646000/001/logs.txt: (1.65732835s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:18:21.932770  682676 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                    
TestFunctional/serial/InvalidService (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-819954 apply -f testdata/invalidsvc.yaml
functional_test.go:2320: (dbg) Non-zero exit: kubectl --context functional-819954 apply -f testdata/invalidsvc.yaml: exit status 1 (65.204332ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2322: kubectl --context functional-819954 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.07s)
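
Note: the validation error above is not about the manifest itself; kubectl failed because it could not download the OpenAPI schema from the unreachable apiserver on 192.168.49.2:8441. The suggested --validate=false only skips that client-side schema check, so the apply would still fail while the apiserver is down (illustrative form, not part of the test run):

	kubectl --context functional-819954 apply --validate=false -f testdata/invalidsvc.yaml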

                                                
                                    
TestFunctional/parallel/NodeLabels (3.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-819954 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-819954 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (68.81402ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-819954 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
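
Note: the template error repeated above is a downstream symptom: with the apiserver refusing connections, kubectl receives an empty List ("items":[]), and (index .items 0) on an empty slice aborts the template before any label can be printed. A guarded form of the same query, shown only as an illustration, prints nothing instead of erroring when no nodes come back:

	kubectl --context functional-819954 get nodes --output=go-template --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'

The label assertions themselves still need a reachable apiserver; the guard only changes the failure mode.
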
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-819954
helpers_test.go:235: (dbg) docker inspect functional-819954:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f",
	        "Created": "2024-01-08T20:16:51.434229781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 676751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:16:51.739441604Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/hosts",
	        "LogPath": "/var/lib/docker/containers/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f/5206d6c086ded7b9a6253e2ee8fab287613da6349483d559a3d691ac65b72b4f-json.log",
	        "Name": "/functional-819954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-819954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-819954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49-init/diff:/var/lib/docker/overlay2/5440a5a336c464ed564efc18a632104b770481b7cc483f7cadb6269a7b019538/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7c709c071d4a87efa86bd0253464a605d9d18a0337d0dc95b6478c47d6ced49/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-819954",
	                "Source": "/var/lib/docker/volumes/functional-819954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-819954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-819954",
	                "name.minikube.sigs.k8s.io": "functional-819954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1aba25e72c9605376ff8b36e23f3db3d6e51d2cc787f128481560252c5247b9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1aba25e72c9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-819954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5206d6c086de",
	                        "functional-819954"
	                    ],
	                    "NetworkID": "fcf0c895894e67ac49df7e64ee5509b677fcbc3ba93183dd55f88ceb52f4a2e1",
	                    "EndpointID": "e8f7119525a658341fb5089e073b105a4cf2d1cbe7becfb0bb15fe47547f2f19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
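
Note: the full docker inspect dump above is what the post-mortem helper captures; when only a field or two matters, the same data can be pulled with a format template (illustrative commands, not part of the test run):

	docker inspect -f '{{.State.Status}}' functional-819954
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-819954

Against the output above these would print "running" and "192.168.49.2", confirming the container is up even though the apiserver inside it is not.
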
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-819954 -n functional-819954
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-819954 -n functional-819954: exit status 2 (431.801073ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 logs -n 25: (2.160396328s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	| ssh     | functional-819954 ssh sudo                                               | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-819954                                                        | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh                                                    | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-819954 cache reload                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	| ssh     | functional-819954 ssh                                                    | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-819954 kubectl --                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | --context functional-819954                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-819954                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| config  | functional-819954 config unset                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| cp      | functional-819954 cp                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-819954 config get                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-819954 config set                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| config  | functional-819954 config get                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh -n                                                 | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | functional-819954 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-819954 config unset                                           | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-819954 config get                                             | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| license |                                                                          | minikube          | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	| cp      | functional-819954 cp                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | functional-819954:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd1918641299/001/cp-test.txt               |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh sudo                                               | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh -n                                                 | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | functional-819954 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh sudo                                               | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | systemctl is-active crio                                                 |                   |         |         |                     |                     |
	| cp      | functional-819954 cp                                                     | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh     | functional-819954 ssh -n                                                 | functional-819954 | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | functional-819954 sudo cat                                               |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:17:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:17:58.464235  680424 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:17:58.464403  680424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:17:58.464406  680424 out.go:309] Setting ErrFile to fd 2...
	I0108 20:17:58.464411  680424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:17:58.464683  680424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:17:58.465115  680424 out.go:303] Setting JSON to false
	I0108 20:17:58.466149  680424 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10819,"bootTime":1704734260,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:17:58.466231  680424 start.go:138] virtualization:  
	I0108 20:17:58.469193  680424 out.go:177] * [functional-819954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:17:58.471386  680424 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:17:58.471535  680424 notify.go:220] Checking for updates...
	I0108 20:17:58.475541  680424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:17:58.477975  680424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:17:58.480192  680424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:17:58.482293  680424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:17:58.484206  680424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:17:58.487083  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:17:58.487175  680424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:17:58.513635  680424 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:17:58.513753  680424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:17:58.641973  680424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:17:58.630894254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:17:58.642062  680424 docker.go:295] overlay module found
	I0108 20:17:58.646069  680424 out.go:177] * Using the docker driver based on existing profile
	I0108 20:17:58.647956  680424 start.go:298] selected driver: docker
	I0108 20:17:58.647974  680424 start.go:902] validating driver "docker" against &{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:17:58.648087  680424 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:17:58.648190  680424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:17:58.764235  680424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:17:58.753935434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:17:58.764631  680424 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:17:58.764674  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:17:58.764699  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:17:58.764709  680424 start_flags.go:323] config:
	{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:17:58.767714  680424 out.go:177] * Starting control plane node functional-819954 in cluster functional-819954
	I0108 20:17:58.769710  680424 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:17:58.771802  680424 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:17:58.773761  680424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:17:58.773815  680424 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0108 20:17:58.773822  680424 cache.go:56] Caching tarball of preloaded images
	I0108 20:17:58.773853  680424 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:17:58.773906  680424 preload.go:174] Found /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0108 20:17:58.773915  680424 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0108 20:17:58.774026  680424 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/config.json ...
	I0108 20:17:58.791691  680424 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:17:58.791705  680424 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:17:58.791724  680424 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:17:58.791773  680424 start.go:365] acquiring machines lock for functional-819954: {Name:mk392846689e434ec56ab3789693926c63d9539d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:17:58.791839  680424 start.go:369] acquired machines lock for "functional-819954" in 43.249µs
	I0108 20:17:58.791858  680424 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:17:58.791863  680424 fix.go:54] fixHost starting: 
	I0108 20:17:58.792216  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:17:58.810342  680424 fix.go:102] recreateIfNeeded on functional-819954: state=Running err=<nil>
	W0108 20:17:58.810361  680424 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:17:58.812153  680424 out.go:177] * Updating the running docker "functional-819954" container ...
	I0108 20:17:58.814044  680424 machine.go:88] provisioning docker machine ...
	I0108 20:17:58.814063  680424 ubuntu.go:169] provisioning hostname "functional-819954"
	I0108 20:17:58.814146  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:58.835933  680424 main.go:141] libmachine: Using SSH client type: native
	I0108 20:17:58.836353  680424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I0108 20:17:58.836367  680424 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-819954 && echo "functional-819954" | sudo tee /etc/hostname
	I0108 20:17:58.992081  680424 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-819954
	
	I0108 20:17:58.992153  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:59.013658  680424 main.go:141] libmachine: Using SSH client type: native
	I0108 20:17:59.014098  680424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I0108 20:17:59.014115  680424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-819954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-819954/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-819954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:17:59.154381  680424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:17:59.154397  680424 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-649468/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-649468/.minikube}
	I0108 20:17:59.154414  680424 ubuntu.go:177] setting up certificates
	I0108 20:17:59.154438  680424 provision.go:83] configureAuth start
	I0108 20:17:59.154511  680424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-819954
	I0108 20:17:59.173236  680424 provision.go:138] copyHostCerts
	I0108 20:17:59.173351  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem, removing ...
	I0108 20:17:59.173360  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem
	I0108 20:17:59.173435  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem (1679 bytes)
	I0108 20:17:59.173535  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem, removing ...
	I0108 20:17:59.173539  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem
	I0108 20:17:59.173564  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem (1078 bytes)
	I0108 20:17:59.173613  680424 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem, removing ...
	I0108 20:17:59.173617  680424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem
	I0108 20:17:59.173639  680424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem (1123 bytes)
	I0108 20:17:59.173679  680424 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem org=jenkins.functional-819954 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-819954]
	I0108 20:17:59.825031  680424 provision.go:172] copyRemoteCerts
	I0108 20:17:59.825111  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:17:59.825149  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:17:59.843393  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:17:59.944001  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:17:59.972873  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 20:18:00.003896  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:18:00.071330  680424 provision.go:86] duration metric: configureAuth took 916.876467ms
	I0108 20:18:00.071352  680424 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:18:00.071582  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:00.071587  680424 machine.go:91] provisioned docker machine in 1.257535756s
	I0108 20:18:00.071594  680424 start.go:300] post-start starting for "functional-819954" (driver="docker")
	I0108 20:18:00.071604  680424 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:18:00.071658  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:18:00.071708  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.111558  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.284092  680424 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:18:00.292369  680424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:18:00.292396  680424 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:18:00.292408  680424 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:18:00.292415  680424 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:18:00.292425  680424 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/addons for local assets ...
	I0108 20:18:00.292501  680424 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/files for local assets ...
	I0108 20:18:00.292588  680424 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem -> 6548052.pem in /etc/ssl/certs
	I0108 20:18:00.292692  680424 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts -> hosts in /etc/test/nested/copy/654805
	I0108 20:18:00.292748  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/654805
	I0108 20:18:00.336333  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:18:00.375975  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts --> /etc/test/nested/copy/654805/hosts (40 bytes)
	I0108 20:18:00.413765  680424 start.go:303] post-start completed in 342.154302ms
	I0108 20:18:00.413846  680424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:18:00.413897  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.435615  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.535985  680424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:18:00.542312  680424 fix.go:56] fixHost completed within 1.750442372s
	I0108 20:18:00.542327  680424 start.go:83] releasing machines lock for "functional-819954", held for 1.750481371s
	I0108 20:18:00.542412  680424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-819954
	I0108 20:18:00.560939  680424 ssh_runner.go:195] Run: cat /version.json
	I0108 20:18:00.561082  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.561208  680424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:18:00.561267  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:00.580669  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.606110  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:00.817109  680424 ssh_runner.go:195] Run: systemctl --version
	I0108 20:18:00.822726  680424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:18:00.828448  680424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 20:18:00.851444  680424 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:18:00.851532  680424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:18:00.862902  680424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 20:18:00.862920  680424 start.go:475] detecting cgroup driver to use...
	I0108 20:18:00.862954  680424 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:18:00.863015  680424 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:18:00.879222  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:18:00.893583  680424 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:18:00.893651  680424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:18:00.909698  680424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:18:00.923688  680424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:18:01.061135  680424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:18:01.192875  680424 docker.go:233] disabling docker service ...
	I0108 20:18:01.192936  680424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:18:01.209575  680424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:18:01.224208  680424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:18:01.359344  680424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:18:01.483635  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:18:01.500466  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:18:01.522159  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 20:18:01.534479  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 20:18:01.547530  680424 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 20:18:01.547594  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 20:18:01.561612  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:18:01.574956  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 20:18:01.587922  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:18:01.600693  680424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:18:01.612933  680424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 20:18:01.625631  680424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:18:01.636493  680424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:18:01.647075  680424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:18:01.770815  680424 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:18:01.989085  680424 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:18:01.989165  680424 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 20:18:01.994608  680424 start.go:543] Will wait 60s for crictl version
	I0108 20:18:01.994663  680424 ssh_runner.go:195] Run: which crictl
	I0108 20:18:01.999881  680424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:18:02.046798  680424 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0108 20:18:02.046868  680424 ssh_runner.go:195] Run: containerd --version
	I0108 20:18:02.077472  680424 ssh_runner.go:195] Run: containerd --version
	I0108 20:18:02.113852  680424 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0108 20:18:02.116279  680424 cli_runner.go:164] Run: docker network inspect functional-819954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:18:02.140354  680424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:18:02.148216  680424 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0108 20:18:02.150174  680424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:18:02.150272  680424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:18:02.191069  680424 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:18:02.191081  680424 containerd.go:518] Images already preloaded, skipping extraction
	I0108 20:18:02.191137  680424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:18:02.234636  680424 containerd.go:604] all images are preloaded for containerd runtime.
	I0108 20:18:02.234648  680424 cache_images.go:84] Images are preloaded, skipping loading
	I0108 20:18:02.234708  680424 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:18:02.279978  680424 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0108 20:18:02.280004  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:18:02.280014  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:18:02.280023  680424 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:18:02.280044  680424 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-819954 NodeName:functional-819954 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:18:02.280201  680424 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-819954"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:18:02.280282  680424 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-819954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0108 20:18:02.280354  680424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:18:02.293102  680424 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:18:02.293177  680424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:18:02.307327  680424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0108 20:18:02.330218  680424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:18:02.352572  680424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0108 20:18:02.375049  680424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:18:02.379907  680424 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954 for IP: 192.168.49.2
	I0108 20:18:02.379939  680424 certs.go:190] acquiring lock for shared ca certs: {Name:mk8baa4ad3918f12788abe17f587583afd1a9c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:02.380074  680424 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key
	I0108 20:18:02.380109  680424 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key
	I0108 20:18:02.380182  680424 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.key
	I0108 20:18:02.380233  680424 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.key.dd3b5fb2
	I0108 20:18:02.380269  680424 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.key
	I0108 20:18:02.380373  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem (1338 bytes)
	W0108 20:18:02.380399  680424 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805_empty.pem, impossibly tiny 0 bytes
	I0108 20:18:02.380407  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:18:02.380433  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:18:02.380462  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:18:02.380485  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem (1679 bytes)
	I0108 20:18:02.380528  680424 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:18:02.381312  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:18:02.411349  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 20:18:02.442268  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:18:02.473820  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:18:02.504402  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:18:02.536771  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:18:02.570637  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:18:02.602528  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 20:18:02.632604  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /usr/share/ca-certificates/6548052.pem (1708 bytes)
	I0108 20:18:02.662769  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:18:02.694837  680424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem --> /usr/share/ca-certificates/654805.pem (1338 bytes)
	I0108 20:18:02.724784  680424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:18:02.755771  680424 ssh_runner.go:195] Run: openssl version
	I0108 20:18:02.763751  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6548052.pem && ln -fs /usr/share/ca-certificates/6548052.pem /etc/ssl/certs/6548052.pem"
	I0108 20:18:02.776125  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.781285  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.781351  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6548052.pem
	I0108 20:18:02.790188  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6548052.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:18:02.801640  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:18:02.813740  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.818909  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.818963  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:18:02.827588  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:18:02.838775  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/654805.pem && ln -fs /usr/share/ca-certificates/654805.pem /etc/ssl/certs/654805.pem"
	I0108 20:18:02.850919  680424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.855653  680424 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.855707  680424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/654805.pem
	I0108 20:18:02.864321  680424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/654805.pem /etc/ssl/certs/51391683.0"
	I0108 20:18:02.875750  680424 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:18:02.880517  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 20:18:02.889564  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 20:18:02.897996  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 20:18:02.906530  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 20:18:02.915138  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 20:18:02.923827  680424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 20:18:02.932403  680424 kubeadm.go:404] StartCluster: {Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:18:02.932495  680424 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 20:18:02.932558  680424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:18:02.974482  680424 cri.go:89] found id: "d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1"
	I0108 20:18:02.974500  680424 cri.go:89] found id: "4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47"
	I0108 20:18:02.974504  680424 cri.go:89] found id: "bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419"
	I0108 20:18:02.974509  680424 cri.go:89] found id: "6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84"
	I0108 20:18:02.974512  680424 cri.go:89] found id: "d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	I0108 20:18:02.974516  680424 cri.go:89] found id: "23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48"
	I0108 20:18:02.974519  680424 cri.go:89] found id: "af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861"
	I0108 20:18:02.974523  680424 cri.go:89] found id: "09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c"
	I0108 20:18:02.974528  680424 cri.go:89] found id: "0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb"
	I0108 20:18:02.974534  680424 cri.go:89] found id: ""
	I0108 20:18:02.974599  680424 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 20:18:03.008491  680424 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c","pid":1274,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c/rootfs","created":"2024-01-08T20:17:05.382496107Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5d770ae7c05d7b13bc2e5621283713ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb","pid":1232,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb/rootfs","created":"2024-01-08T20:17:05.30318414Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","io.kubernetes.cri.sandbox-name":"etcd-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5ddae75ae78b04ccb699098c29e5635"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48","pid":1314,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48/rootfs","created":"2024-01-08T20:17:05.451653659Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"94e52f63c4e823859e27d9606ecfb426"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","pid":1685,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb/rootfs","created":"2024-01-08T20:17:27.40401502Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_1a29cdd1-3689-4c64-b1f6-78051dd0f4cd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a29cdd1-3689-4c64-b1f6-78051dd0f4cd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","pid":1798,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032/rootfs","created":"2024-01-08T20:17:27.902074703Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-8f54v_c8ee3b5c-77b8-49cd-be3b-2fed766a1681","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-8f54v","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c8ee3b5c-77b8-49cd-be3b-2fed766a1681"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47","pid":2131,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47/rootfs","created":"2024-01-08T20:17:40.442612365Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-kfq5h","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"096d598a-b3a8-447b-89e0-f8d6788334d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84/rootfs","created":"2024-01-08T20:17:27.927929232Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","io.kubernetes.cri.sandbox-name":"kube-proxy-rkdqg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b744ff0b-b217-4f49-8af0-76952412ab2b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","pid":1158,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441/rootfs","created":"2024-01-08T20:17:05.170169098Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-819954_f878c4636850ccf2e5b70c6db6ff0087","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f878c4636850ccf2e5b70c6db6ff0087"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861","pid":1325,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861/rootfs","created":"2024-01-08T20:17:05.487200444Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f878c4636850ccf2e5b70c6db6ff0087"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","pid":1192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68/rootfs","created":"2024-01-08T20:17:05.232105635Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-819954_94e52f63c4e823859e27d9606ecfb426","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"94e52f63c4e823859e27d9606ecfb426"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419","pid":1917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419/rootfs","created":"2024-01-08T20:17:28.197457923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032","io.kubernetes.cri.sandbox-name":"kindnet-8f54v","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c8ee3b5c-77b8-49cd-be3b-2fed766a1681"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1","pid":2944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1/rootfs","created":"2024-01-08T20:17:58.630550052Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1a29cdd1-3689-4c64-b1f6-78051dd0f4cd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","pid":1144,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d/rootfs","created":"2024-01-08T20:17:05.143485506Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-819954_5d770ae7c05d7b13bc2e5621283713ab","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5d770ae7c05d7b13bc2e5621283713ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","pid":2101,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c/rootfs","created":"2024-01-08T20:17:40.348140186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-kfq5h_096d598a-b3a8-447b-89e0-f8d6788334d5","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-kfq5h","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"096d598a-b3a8-447b-89e0-f8d6788334d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","pid":1133,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940/rootfs","created":"2024-01-08T20:17:05.11502994Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-819954_a5ddae75ae78b04ccb699098c29e5635","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-819954","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5ddae75ae78b04ccb699098c29e5635"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","pid":1805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d/rootfs","created":"2024-01-08T20:17:27.837270213Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-rkdqg_b744ff0b-b217-4f49-8af0-76952412ab2b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-rkdqg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b744ff0b-b217-4f49-8af0-76952412ab2b"},"owner":"root"}]
	I0108 20:18:03.008835  680424 cri.go:126] list returned 16 containers
	I0108 20:18:03.008844  680424 cri.go:129] container: {ID:09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c Status:running}
	I0108 20:18:03.008863  680424 cri.go:135] skipping {09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c running}: state = "running", want "paused"
	I0108 20:18:03.008872  680424 cri.go:129] container: {ID:0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb Status:running}
	I0108 20:18:03.008878  680424 cri.go:135] skipping {0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb running}: state = "running", want "paused"
	I0108 20:18:03.008888  680424 cri.go:129] container: {ID:23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 Status:running}
	I0108 20:18:03.008894  680424 cri.go:135] skipping {23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 running}: state = "running", want "paused"
	I0108 20:18:03.008899  680424 cri.go:129] container: {ID:28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb Status:running}
	I0108 20:18:03.008905  680424 cri.go:131] skipping 28f17cb160b77232a13526889e5c6772c872303e940fd87eaf8f3fe3c2bae7fb - not in ps
	I0108 20:18:03.008909  680424 cri.go:129] container: {ID:30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032 Status:running}
	I0108 20:18:03.008915  680424 cri.go:131] skipping 30df59d2570db6e96be4739feb5118046f32bf16c013dde0932cae678dfea032 - not in ps
	I0108 20:18:03.008919  680424 cri.go:129] container: {ID:4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 Status:running}
	I0108 20:18:03.008925  680424 cri.go:135] skipping {4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 running}: state = "running", want "paused"
	I0108 20:18:03.008930  680424 cri.go:129] container: {ID:6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 Status:running}
	I0108 20:18:03.008936  680424 cri.go:135] skipping {6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 running}: state = "running", want "paused"
	I0108 20:18:03.008941  680424 cri.go:129] container: {ID:8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441 Status:running}
	I0108 20:18:03.008963  680424 cri.go:131] skipping 8d8f6b4ff0fbedc09da086c68c072b77e82fd7e632269f2d98c18c810891c441 - not in ps
	I0108 20:18:03.008968  680424 cri.go:129] container: {ID:af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 Status:running}
	I0108 20:18:03.008976  680424 cri.go:135] skipping {af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 running}: state = "running", want "paused"
	I0108 20:18:03.008984  680424 cri.go:129] container: {ID:b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68 Status:running}
	I0108 20:18:03.009020  680424 cri.go:131] skipping b2410c96230109e2e646b7748ed2c4b653317ef44fbc5b419932c4b3cc348f68 - not in ps
	I0108 20:18:03.009025  680424 cri.go:129] container: {ID:bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 Status:running}
	I0108 20:18:03.009031  680424 cri.go:135] skipping {bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 running}: state = "running", want "paused"
	I0108 20:18:03.009038  680424 cri.go:129] container: {ID:d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 Status:running}
	I0108 20:18:03.009044  680424 cri.go:135] skipping {d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 running}: state = "running", want "paused"
	I0108 20:18:03.009049  680424 cri.go:129] container: {ID:e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d Status:running}
	I0108 20:18:03.009054  680424 cri.go:131] skipping e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d - not in ps
	I0108 20:18:03.009059  680424 cri.go:129] container: {ID:ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c Status:running}
	I0108 20:18:03.009064  680424 cri.go:131] skipping ef8377f1664c99f807e5fcd3069ef09526d834406082b958e892f57d5969801c - not in ps
	I0108 20:18:03.009068  680424 cri.go:129] container: {ID:f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940 Status:running}
	I0108 20:18:03.009073  680424 cri.go:131] skipping f16a2df5f2aebaf2804809e3b07b7d1abe7cc50736976d8d49d230c22ee77940 - not in ps
	I0108 20:18:03.009078  680424 cri.go:129] container: {ID:f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d Status:running}
	I0108 20:18:03.009084  680424 cri.go:131] skipping f9bda501df60f9501acec983c4eff6a3ce42aacc0c7c2f54fc004c81ec46e82d - not in ps
	I0108 20:18:03.009148  680424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:18:03.027274  680424 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 20:18:03.027286  680424 kubeadm.go:636] restartCluster start
	I0108 20:18:03.027343  680424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 20:18:03.038603  680424 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:03.039158  680424 kubeconfig.go:92] found "functional-819954" server: "https://192.168.49.2:8441"
	I0108 20:18:03.040709  680424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 20:18:03.052615  680424 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-01-08 20:16:57.502159996 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-01-08 20:18:02.365661107 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
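	(Editor's note, outside the captured log: the diff above is what drives the "needs reconfigure: configs differ" decision at kubeadm.go:602 — the logged `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` exits 0 when the two files match and 1 when they differ. A minimal Go sketch of that exit-code check follows; it is illustrative only, not minikube's actual code, and `needsReconfigure` is a hypothetical helper name.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfigure reports whether the proposed kubeadm config differs
    // from the current one, based on the exit status of `diff -u`:
    // 0 = identical, 1 = files differ, >=2 = diff itself failed.
    func needsReconfigure(current, proposed string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
        if err == nil {
            return false, "", nil // exit status 0: configs already match
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // exit status 1: configs differ
        }
        return false, "", err // any other failure is a real error
    }

    func main() {
        differ, diff, err := needsReconfigure(
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new",
        )
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        if differ {
            fmt.Println("needs reconfigure: configs differ:\n" + diff)
        }
    }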
	I0108 20:18:03.052624  680424 kubeadm.go:1135] stopping kube-system containers ...
	I0108 20:18:03.052641  680424 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 20:18:03.052710  680424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:18:03.103938  680424 cri.go:89] found id: "d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1"
	I0108 20:18:03.103951  680424 cri.go:89] found id: "4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47"
	I0108 20:18:03.103970  680424 cri.go:89] found id: "bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419"
	I0108 20:18:03.103974  680424 cri.go:89] found id: "6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84"
	I0108 20:18:03.103977  680424 cri.go:89] found id: "d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	I0108 20:18:03.103981  680424 cri.go:89] found id: "23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48"
	I0108 20:18:03.103984  680424 cri.go:89] found id: "af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861"
	I0108 20:18:03.103987  680424 cri.go:89] found id: "09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c"
	I0108 20:18:03.103991  680424 cri.go:89] found id: "0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb"
	I0108 20:18:03.103996  680424 cri.go:89] found id: ""
	I0108 20:18:03.104001  680424 cri.go:234] Stopping containers: [d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb]
	I0108 20:18:03.104060  680424 ssh_runner.go:195] Run: which crictl
	I0108 20:18:03.109069  680424 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb
	I0108 20:18:08.374347  680424 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb: (5.265240873s)
	W0108 20:18:08.374414  680424 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1 4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47 bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419 6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84 d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e 23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48 af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861 09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c 0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb: Process exited with status 1
	stdout:
	d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1
	4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47
	bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419
	6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84
	
	stderr:
	E0108 20:18:08.370998    3382 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": not found" containerID="d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e"
	time="2024-01-08T20:18:08Z" level=fatal msg="stopping the container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2a0915aa121effd9b4a2d92f6f7ccc18d675f7e811ea80f7514103e7782ca4e\": not found"
	I0108 20:18:08.374497  680424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 20:18:08.457602  680424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:18:08.469679  680424 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 20:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 20:17 /etc/kubernetes/scheduler.conf
	
	I0108 20:18:08.469753  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0108 20:18:08.481377  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0108 20:18:08.492216  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0108 20:18:08.503457  680424 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:08.503514  680424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 20:18:08.513986  680424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0108 20:18:08.524732  680424 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:18:08.524786  680424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 20:18:08.535465  680424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:18:08.546706  680424 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 20:18:08.546723  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:08.609948  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:10.789213  680424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.179241477s)
	I0108 20:18:10.789231  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:10.997287  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.088107  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.185343  680424 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:18:11.185410  680424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:18:11.203104  680424 api_server.go:72] duration metric: took 17.758159ms to wait for apiserver process to appear ...
	I0108 20:18:11.203118  680424 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:18:11.203149  680424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0108 20:18:11.213592  680424 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
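	(Editor's note, outside the captured log: the healthz probe above, at api_server.go:253, is an HTTPS GET against https://192.168.49.2:8441/healthz that expects a 200 response with body "ok"; the apiserver serves a certificate signed by minikube's own CA, so a standalone probe either loads that CA or skips verification. A minimal sketch of such a probe follows; it is illustrative only, not minikube's implementation, and skipping certificate verification here is an assumption for brevity.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip certificate verification because the apiserver's serving cert
        // is signed by the cluster's own CA, which this sketch does not load.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8441/healthz")
        if err != nil {
            // e.g. "connection refused" while the apiserver is restarting,
            // as seen later in this trace.
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }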
	I0108 20:18:11.229628  680424 api_server.go:141] control plane version: v1.28.4
	I0108 20:18:11.229645  680424 api_server.go:131] duration metric: took 26.52153ms to wait for apiserver health ...
	I0108 20:18:11.229653  680424 cni.go:84] Creating CNI manager for ""
	I0108 20:18:11.229659  680424 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:18:11.231542  680424 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:18:11.233706  680424 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:18:11.238795  680424 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:18:11.238807  680424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:18:11.272031  680424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:18:11.665927  680424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:18:11.674252  680424 system_pods.go:59] 8 kube-system pods found
	I0108 20:18:11.674268  680424 system_pods.go:61] "coredns-5dd5756b68-kfq5h" [096d598a-b3a8-447b-89e0-f8d6788334d5] Running
	I0108 20:18:11.674272  680424 system_pods.go:61] "etcd-functional-819954" [7cfcab5f-d05b-43ce-aaf4-936977eda08c] Running
	I0108 20:18:11.674276  680424 system_pods.go:61] "kindnet-8f54v" [c8ee3b5c-77b8-49cd-be3b-2fed766a1681] Running
	I0108 20:18:11.674281  680424 system_pods.go:61] "kube-apiserver-functional-819954" [7c24bde5-5c62-443f-95c3-d23a713d71bd] Running
	I0108 20:18:11.674290  680424 system_pods.go:61] "kube-controller-manager-functional-819954" [c56d710c-a540-427b-9b64-031140796e4f] Running
	I0108 20:18:11.674294  680424 system_pods.go:61] "kube-proxy-rkdqg" [b744ff0b-b217-4f49-8af0-76952412ab2b] Running
	I0108 20:18:11.674299  680424 system_pods.go:61] "kube-scheduler-functional-819954" [c10ec52c-2a67-4ed9-80bc-7c592b59b99c] Running
	I0108 20:18:11.674306  680424 system_pods.go:61] "storage-provisioner" [1a29cdd1-3689-4c64-b1f6-78051dd0f4cd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 20:18:11.674312  680424 system_pods.go:74] duration metric: took 8.374624ms to wait for pod list to return data ...
	I0108 20:18:11.674320  680424 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:18:11.677575  680424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:18:11.677594  680424 node_conditions.go:123] node cpu capacity is 2
	I0108 20:18:11.677603  680424 node_conditions.go:105] duration metric: took 3.279392ms to run NodePressure ...
	I0108 20:18:11.677625  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:18:11.896459  680424 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 20:18:11.904865  680424 retry.go:31] will retry after 336.700738ms: kubelet not initialised
	I0108 20:18:12.249628  680424 kubeadm.go:787] kubelet initialised
	I0108 20:18:12.249640  680424 kubeadm.go:788] duration metric: took 353.167628ms waiting for restarted kubelet to initialise ...
	I0108 20:18:12.249656  680424 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:18:12.278777  680424 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.282756  680424 pod_ready.go:97] error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	I0108 20:18:14.282790  680424 pod_ready.go:81] duration metric: took 2.003985471s waiting for pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.282801  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kfq5h" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:37286->192.168.49.2:8441: read: connection reset by peer
	I0108 20:18:14.282822  680424 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283162  680424 pod_ready.go:97] error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283173  680424 pod_ready.go:81] duration metric: took 343.571µs waiting for pod "etcd-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283181  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283202  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283501  680424 pod_ready.go:97] error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283511  680424 pod_ready.go:81] duration metric: took 303.547µs waiting for pod "kube-apiserver-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283521  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283546  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.283950  680424 pod_ready.go:97] error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283968  680424 pod_ready.go:81] duration metric: took 407.242µs waiting for pod "kube-controller-manager-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.283981  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.283996  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rkdqg" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.284223  680424 pod_ready.go:97] error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284231  680424 pod_ready.go:81] duration metric: took 230.332µs waiting for pod "kube-proxy-rkdqg" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.284238  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rkdqg" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284251  680424 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-819954" in "kube-system" namespace to be "Ready" ...
	I0108 20:18:14.284467  680424 pod_ready.go:97] error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284474  680424 pod_ready.go:81] duration metric: took 217.541µs waiting for pod "kube-scheduler-functional-819954" in "kube-system" namespace to be "Ready" ...
	E0108 20:18:14.284480  680424 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-819954" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.284496  680424 pod_ready.go:38] duration metric: took 2.03483066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:18:14.284510  680424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0108 20:18:14.293627  680424 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0108 20:18:14.293642  680424 kubeadm.go:640] restartCluster took 11.266351717s
	I0108 20:18:14.293648  680424 kubeadm.go:406] StartCluster complete in 11.361257745s
	I0108 20:18:14.293671  680424 settings.go:142] acquiring lock: {Name:mkb63cd96d7a856f465b0592d8a592dc849b8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:14.293726  680424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:18:14.294383  680424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/kubeconfig: {Name:mk40e5900c8ed31a9e7a0515010236c17752c8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:18:14.295469  680424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:18:14.295728  680424 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:14.295766  680424 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:18:14.295825  680424 addons.go:69] Setting storage-provisioner=true in profile "functional-819954"
	I0108 20:18:14.295836  680424 addons.go:237] Setting addon storage-provisioner=true in "functional-819954"
	W0108 20:18:14.295841  680424 addons.go:246] addon storage-provisioner should already be in state true
	I0108 20:18:14.295896  680424 host.go:66] Checking if "functional-819954" exists ...
	I0108 20:18:14.296279  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	W0108 20:18:14.296510  680424 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-819954" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.296521  680424 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.296548  680424 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:18:14.301958  680424 out.go:177] * Verifying Kubernetes components...
	I0108 20:18:14.296882  680424 addons.go:69] Setting default-storageclass=true in profile "functional-819954"
	I0108 20:18:14.303886  680424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-819954"
	I0108 20:18:14.303956  680424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:18:14.304342  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:14.328200  680424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:18:14.330160  680424 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:18:14.330172  680424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:18:14.330234  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:14.350216  680424 addons.go:237] Setting addon default-storageclass=true in "functional-819954"
	W0108 20:18:14.350227  680424 addons.go:246] addon default-storageclass should already be in state true
	I0108 20:18:14.350248  680424 host.go:66] Checking if "functional-819954" exists ...
	I0108 20:18:14.350700  680424 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:14.387943  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:14.403990  680424 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:18:14.404002  680424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:18:14.404061  680424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:14.425786  680424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	E0108 20:18:14.436656  680424 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0108 20:18:14.436680  680424 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0108 20:18:14.436696  680424 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0108 20:18:14.436847  680424 node_ready.go:35] waiting up to 6m0s for node "functional-819954" to be "Ready" ...
	I0108 20:18:14.437355  680424 node_ready.go:53] error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:14.437365  680424 node_ready.go:38] duration metric: took 508.591µs waiting for node "functional-819954" to be "Ready" ...
	I0108 20:18:14.440079  680424 out.go:177] 
	W0108 20:18:14.441826  680424 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-819954": Get "https://192.168.49.2:8441/api/v1/nodes/functional-819954": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.441847  680424 out.go:239] * 
	W0108 20:18:14.443217  680424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:18:14.445094  680424 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5db7d28fe46b8       04b4eaa3d3db8       13 seconds ago       Running             kindnet-cni               1                   30df59d2570db       kindnet-8f54v
	21ef5a21b398d       3ca3ca488cf13       13 seconds ago       Running             kube-proxy                1                   f9bda501df60f       kube-proxy-rkdqg
	72f1ea2bc29cc       ba04bb24b9575       13 seconds ago       Running             storage-provisioner       2                   28f17cb160b77       storage-provisioner
	158dd083763bc       97e04611ad434       13 seconds ago       Running             coredns                   1                   ef8377f1664c9       coredns-5dd5756b68-kfq5h
	1a740457ab4ba       04b4c447bb9d4       13 seconds ago       Exited              kube-apiserver            1                   908b9cbce2bdf       kube-apiserver-functional-819954
	d073a74d1e9ca       ba04bb24b9575       27 seconds ago       Exited              storage-provisioner       1                   28f17cb160b77       storage-provisioner
	4d6d515dfb3db       97e04611ad434       45 seconds ago       Exited              coredns                   0                   ef8377f1664c9       coredns-5dd5756b68-kfq5h
	bdd9436308352       04b4eaa3d3db8       58 seconds ago       Exited              kindnet-cni               0                   30df59d2570db       kindnet-8f54v
	6eafd65391bc0       3ca3ca488cf13       58 seconds ago       Exited              kube-proxy                0                   f9bda501df60f       kube-proxy-rkdqg
	23cacc81bdb79       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   b2410c9623010       kube-controller-manager-functional-819954
	af97d71bc53af       05c284c929889       About a minute ago   Running             kube-scheduler            0                   8d8f6b4ff0fbe       kube-scheduler-functional-819954
	0a6b7c1f396e6       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   f16a2df5f2aeb       etcd-functional-819954
	
	
	==> containerd <==
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752518809Z" level=info msg="shim disconnected" id=1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752787640Z" level=warning msg="cleaning up after shim disconnected" id=1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550 namespace=k8s.io
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.752905998Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.776524519Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3918 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:12 functional-819954 containerd[3186]: time="2024-01-08T20:18:12.796934046Z" level=info msg="StartContainer for \"21ef5a21b398de4c92f7c23b2a7438f018e4b9cb6dd4003c3394b3afab228ab7\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.241548449Z" level=info msg="StopContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" with timeout 2 (s)"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.242121671Z" level=info msg="Stop container \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" with signal terminated"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272676303Z" level=info msg="shim disconnected" id=e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272871986Z" level=warning msg="cleaning up after shim disconnected" id=e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d namespace=k8s.io
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.272892966Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.294244263Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321472020Z" level=info msg="shim disconnected" id=09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321805301Z" level=warning msg="cleaning up after shim disconnected" id=09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c namespace=k8s.io
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.321837958Z" level=info msg="cleaning up dead shim"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.333361005Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4114 runtime=io.containerd.runc.v2\n"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.336100709Z" level=info msg="StopContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.337272327Z" level=info msg="StopPodSandbox for \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.337466344Z" level=info msg="Container to stop \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.341124236Z" level=info msg="TearDown network for sandbox \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\" successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.341240715Z" level=info msg="StopPodSandbox for \"e5b22eedc779dc4bb01b091b678361be7cc65ecda674614997fcde869006f67d\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.353288312Z" level=info msg="RemoveContainer for \"c0ab552c13365935acc32ea8138dac9c7273050ed9227c0a302aa3567b3c95af\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.360653188Z" level=info msg="RemoveContainer for \"c0ab552c13365935acc32ea8138dac9c7273050ed9227c0a302aa3567b3c95af\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.362379066Z" level=info msg="RemoveContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\""
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.367066546Z" level=info msg="RemoveContainer for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" returns successfully"
	Jan 08 20:18:13 functional-819954 containerd[3186]: time="2024-01-08T20:18:13.367952997Z" level=error msg="ContainerStatus for \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09a4c5a6880db340af280e0a5f933dbb40c375a2febba025d2c34fdaf118385c\": not found"
	
	
	==> coredns [158dd083763bcd5814de3a45796be388b9d8354125ae14dd81467887a246f40b] <==
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34838 - 17254 "HINFO IN 8829591644871294786.2221726687050420815. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015153903s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=480": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=462": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [4d6d515dfb3db4ccb2a9a85db845652d6739f39edcb2da94aa6b00bcd82f5a47] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36655 - 51930 "HINFO IN 1864645754049744283.7379797473336989881. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012250114s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000867] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001057] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001228] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +0.003300] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=000000007aebacca
	[  +0.001230] FS-Cache: O-key=[8] 'e63a5c0100000000'
	[  +0.000802] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000001ba12843
	[  +0.001177] FS-Cache: N-key=[8] 'e63a5c0100000000'
	[  +2.625464] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001086] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=0000000019c2b840
	[  +0.001196] FS-Cache: O-key=[8] 'e53a5c0100000000'
	[  +0.000801] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001152] FS-Cache: N-key=[8] 'e53a5c0100000000'
	[  +0.329983] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001107] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000aa56b1d3
	[  +0.001269] FS-Cache: O-key=[8] 'ee3a5c0100000000'
	[  +0.000826] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000003b0b7e1f
	[  +0.001169] FS-Cache: N-key=[8] 'ee3a5c0100000000'
	[Jan 8 19:41] hrtimer: interrupt took 4780855 ns
	
	
	==> etcd [0a6b7c1f396e6129a18c865967c88baac1d9a08c849379ef63b0b7c86a0ae0fb] <==
	{"level":"info","ts":"2024-01-08T20:17:05.460103Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-08T20:17:05.460631Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470221Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470294Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470305Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:17:05.470636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-08T20:17:05.470715Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-08T20:17:05.617396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-08T20:17:05.617491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.617517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:17:05.618496Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.629366Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-819954 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:17:05.633162Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633264Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633299Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:17:05.633313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:17:05.633541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:17:05.63369Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:17:05.633801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:17:05.634334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:17:05.641975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 20:18:26 up  3:00,  0 users,  load average: 1.25, 1.19, 1.42
	Linux functional-819954 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [5db7d28fe46b8deef1dfbdb0cafda5dacd79814ee5e68f8967e2918879074683] <==
	I0108 20:18:12.806820       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:18:12.807086       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:18:12.807315       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:18:12.897158       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:18:12.897363       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:18:13.210331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:18:13.210371       1 main.go:227] handling current node
	I0108 20:18:23.311310       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0108 20:18:23.311502       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0108 20:18:24.311955       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0108 20:18:26.313494       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> kindnet [bdd943630835222f396bbce64a336004de15ea168a052d286adf0141a16d1419] <==
	I0108 20:17:28.306230       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:17:28.306523       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:17:28.306745       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:17:28.306840       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:17:28.306960       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:17:28.797755       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:28.797794       1 main.go:227] handling current node
	I0108 20:17:38.804892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:38.804922       1 main.go:227] handling current node
	I0108 20:17:48.815546       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:48.815578       1 main.go:227] handling current node
	I0108 20:17:58.826405       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:58.826506       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1a740457ab4ba959baf05cd1874753ed16e21ef8df1f44d273fa101b40925550] <==
	I0108 20:18:12.628602       1 options.go:220] external host was not specified, using 192.168.49.2
	I0108 20:18:12.630349       1 server.go:148] Version: v1.28.4
	I0108 20:18:12.630375       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0108 20:18:12.630625       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [23cacc81bdb790c18ce2452fda1d85158e6d2cc36758f291401971a94c6fda48] <==
	I0108 20:17:25.493959       1 shared_informer.go:318] Caches are synced for daemon sets
	I0108 20:17:25.517482       1 shared_informer.go:318] Caches are synced for attach detach
	I0108 20:17:25.553343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.538152ms"
	I0108 20:17:25.593727       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8f54v"
	I0108 20:17:25.604908       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rkdqg"
	I0108 20:17:25.639245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.807962ms"
	I0108 20:17:25.640124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="275.337µs"
	I0108 20:17:25.805998       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0108 20:17:25.879048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cgl4x"
	I0108 20:17:25.901197       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 20:17:25.901352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.150922ms"
	I0108 20:17:25.913471       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 20:17:25.913502       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0108 20:17:25.922245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.844787ms"
	I0108 20:17:25.922325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.706µs"
	I0108 20:17:26.994510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.738µs"
	I0108 20:17:27.013443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.764µs"
	I0108 20:17:40.517212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.753µs"
	I0108 20:17:41.531172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.361473ms"
	I0108 20:17:41.531382       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0108 20:17:41.532615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.361µs"
	I0108 20:18:12.301322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.88429ms"
	I0108 20:18:12.301498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.235µs"
	E0108 20:18:25.449505       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.49.2:8441/api": dial tcp 192.168.49.2:8441: connect: connection refused
	I0108 20:18:25.908021       1 garbagecollector.go:818] "failed to discover preferred resources" error="Get \"https://192.168.49.2:8441/api\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [21ef5a21b398de4c92f7c23b2a7438f018e4b9cb6dd4003c3394b3afab228ab7] <==
	I0108 20:18:12.911184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:18:13.011357       1 shared_informer.go:318] Caches are synced for node config
	I0108 20:18:13.011399       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:18:13.011426       1 shared_informer.go:318] Caches are synced for endpoint slice config
	W0108 20:18:13.287280       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:13.287503       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:13.287654       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0108 20:18:14.138741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.138805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.184373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.184435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:14.720926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:14.720975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:16.466907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:16.466950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:16.867603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:16.867651       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:17.516931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:17.516980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:20.388441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:20.388499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:22.819745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:22.819793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-819954&resourceVersion=475": dial tcp 192.168.49.2:8441: connect: connection refused
	W0108 20:18:23.593019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	E0108 20:18:23.593066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=480": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [6eafd65391bc08e5b8239b1fb2c4477d0c91e31c6f20071050365768b2954f84] <==
	I0108 20:17:27.988534       1 server_others.go:69] "Using iptables proxy"
	I0108 20:17:28.010073       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 20:17:28.059460       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:17:28.071657       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:17:28.071704       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:17:28.071712       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:17:28.071795       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:17:28.072079       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:17:28.072090       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:17:28.073821       1 config.go:188] "Starting service config controller"
	I0108 20:17:28.073870       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:17:28.073917       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:17:28.073922       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:17:28.075775       1 config.go:315] "Starting node config controller"
	I0108 20:17:28.075791       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:17:28.174021       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:17:28.174074       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:17:28.176053       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [af97d71bc53afba0fa17002bb52d0f9545e7978ca3be9452b5c6c67296e51861] <==
	W0108 20:17:09.284956       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:17:09.284982       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:17:09.285114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:17:09.285141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:17:09.285321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:17:09.285344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:17:09.285465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:17:09.285531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:17:09.285618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:17:09.285674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:17:09.285735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:17:09.285755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 20:17:10.107717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:17:10.108063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 20:17:10.129275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.129536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:17:10.203396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.203638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:17:10.220429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:17:10.220655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:17:10.354507       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:17:10.354736       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:17:10.394755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:17:10.394807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0108 20:17:12.772331       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.225874    3571 status_manager.go:853] "Failed to get status for pod" podUID="f878c4636850ccf2e5b70c6db6ff0087" pod="kube-system/kube-scheduler-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.226052    3571 status_manager.go:853] "Failed to get status for pod" podUID="a5ddae75ae78b04ccb699098c29e5635" pod="kube-system/etcd-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.226210    3571 status_manager.go:853] "Failed to get status for pod" podUID="27b4a77c3ebefa78b9f28bd7e336085d" pod="kube-system/kube-apiserver-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.226396    3571 status_manager.go:853] "Failed to get status for pod" podUID="94e52f63c4e823859e27d9606ecfb426" pod="kube-system/kube-controller-manager-functional-819954" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-819954\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.226557    3571 status_manager.go:853] "Failed to get status for pod" podUID="1a29cdd1-3689-4c64-b1f6-78051dd0f4cd" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.226720    3571 status_manager.go:853] "Failed to get status for pod" podUID="b744ff0b-b217-4f49-8af0-76952412ab2b" pod="kube-system/kube-proxy-rkdqg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rkdqg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.227466    3571 status_manager.go:853] "Failed to get status for pod" podUID="096d598a-b3a8-447b-89e0-f8d6788334d5" pod="kube-system/coredns-5dd5756b68-kfq5h" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kfq5h\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.349269    3571 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-819954\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-819954?resourceVersion=0&timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.349479    3571 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-819954\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.349648    3571 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-819954\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.349797    3571 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-819954\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.350691    3571 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-819954\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.350715    3571 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.448723    3571 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.448987    3571 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.449273    3571 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.449480    3571 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.449706    3571 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: I0108 20:18:21.449729    3571 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.449938    3571 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="200ms"
	Jan 08 20:18:21 functional-819954 kubelet[3571]: E0108 20:18:21.651362    3571 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="400ms"
	Jan 08 20:18:22 functional-819954 kubelet[3571]: E0108 20:18:22.051928    3571 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="800ms"
	Jan 08 20:18:22 functional-819954 kubelet[3571]: E0108 20:18:22.853275    3571 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="1.6s"
	Jan 08 20:18:24 functional-819954 kubelet[3571]: E0108 20:18:24.454551    3571 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-819954?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="3.2s"
	Jan 08 20:18:27 functional-819954 kubelet[3571]: E0108 20:18:27.322848    3571 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-819954.17a878a15277a0f4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-819954", UID:"5d770ae7c05d7b13bc2e5621283713ab", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Stopping container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"functional-819954"}, FirstTimestamp:time.Date(2024, time.January, 8, 20, 18, 13, 228372212, time.Local), LastTimestamp:time.Date(2024, time.January, 8, 20, 18, 13, 228372212, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-819954"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	
	
	==> storage-provisioner [72f1ea2bc29cc4386d42b1b63cf02ae0fe73685627fd8d9cd52eea91edf7d50c] <==
	I0108 20:18:12.751809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:18:12.769640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:18:12.769735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0108 20:18:16.226427       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0108 20:18:20.484935       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0108 20:18:24.080349       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0108 20:18:27.131761       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d073a74d1e9ca42b70d578b403c76101a2144bfc923bc789852217c5a8b9cfd1] <==
	I0108 20:17:58.677934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:17:58.713738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:17:58.714002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:17:58.723448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:17:58.725753       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9!
	I0108 20:17:58.727292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca438388-635f-4015-b991-0cb05b966748", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9 became leader
	I0108 20:17:58.826378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-819954_2f0cdb41-5f62-494c-9b9e-f0fd16f31be9!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:18:26.645276  683433 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-819954 -n functional-819954
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-819954 -n functional-819954: exit status 2 (445.544545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-819954" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (3.17s)
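The root cause is visible in the kube-apiserver log above: the restarted apiserver exits immediately with "failed to listen on 0.0.0.0:8441: bind: address already in use", so the control plane stays Stopped and the node-labels query has nothing to talk to. A minimal way to see which process still holds the port would be to run the following inside the node (debugging sketch only, not part of the test; assumes ss is available in the node image):

	out/minikube-linux-arm64 -p functional-819954 ssh -- sudo ss -ltnp | grep 8441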

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr
E0108 20:18:27.358319  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr: (3.897721621s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-819954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.22s)
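The "image ls" assertion only reflects what the node's containerd runtime actually holds, so a load that fails silently leaves the tag missing. Re-running the same pair of commands by hand, plus a direct query of containerd, confirms whether the image ever reached the runtime (sketch; the ctr invocation is an extra check not used by the test):

	out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr
	out/minikube-linux-arm64 -p functional-819954 image ls | grep addon-resizer
	out/minikube-linux-arm64 -p functional-819954 ssh -- sudo ctr -n k8s.io images ls | grep addon-resizer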

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-819954 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1436: (dbg) Non-zero exit: kubectl --context functional-819954 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (90.217708ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1442: failed to create hello-node deployment with this command "kubectl --context functional-819954 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.09s)
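As with the NodeLabels failure, kubectl cannot reach the apiserver on 192.168.49.2:8441, so the hello-node deployment is never created and the later hello-node service tests inherit the failure. A quick reachability check before the create makes that explicit (illustrative sketch, not part of the test):

	kubectl --context functional-819954 cluster-info
	curl -k https://192.168.49.2:8441/healthz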

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 service list
functional_test.go:1458: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 service list: exit status 119 (469.962121ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-819954"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-819954"

                                                
                                                
** /stderr **
functional_test.go:1460: failed to do service list. args "out/minikube-linux-arm64 -p functional-819954 service list" : exit status 119
functional_test.go:1463: expected 'service list' to contain *hello-node* but got -"* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-819954\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.47s)
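Because the hello-node deployment above was never created and the control plane is reported Stopped, minikube exits with status 119 before listing anything. On a healthy profile the same expectation could be checked by hand (sketch):

	out/minikube-linux-arm64 -p functional-819954 service list | grep hello-node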

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr: (3.824657923s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-819954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 service list -o json
functional_test.go:1488: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 service list -o json: exit status 119 (404.23081ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-819954"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-819954"

                                                
                                                
** /stderr **
functional_test.go:1490: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-819954 service list -o json": exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 service --namespace=default --https --url hello-node: exit status 119 (363.458802ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-819954"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-819954"

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-linux-arm64 -p functional-819954 service --namespace=default --https --url hello-node" : exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (5.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 service hello-node --url --format={{.IP}}: exit status 115 (5.506355901s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-819954 service hello-node --url --format={{.IP}}": exit status 115
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (5.51s)
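With no hello-node service in the default namespace the Go template has nothing to render, so the command returns an empty string where the test expects a valid IP. When the service exists, the same invocation prints only the templated field, i.e. the node IP (192.168.49.2 on this profile); the output shape here is inferred from the --format flag and the test's own IP assertion:

	out/minikube-linux-arm64 -p functional-819954 service hello-node --url --format={{.IP}}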

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.856689287s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-819954
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 image load --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr: (3.525463508s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-819954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 service hello-node --url: exit status 115 (473.333032ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-linux-arm64 -p functional-819954 service hello-node --url": exit status 115
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image save gcr.io/google-containers/addon-resizer:functional-819954 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)
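The missing tar is also what breaks ImageLoadFromFile below, whose stderr shows the stat failing on the same path. Reproducing the save/load round trip by hand would look like this (sketch, using the same path as the test):

	out/minikube-linux-arm64 -p functional-819954 image save gcr.io/google-containers/addon-resizer:functional-819954 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
	ls -l /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-819954 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr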

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0108 20:18:42.349151  685162 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:18:42.349737  685162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:18:42.349758  685162 out.go:309] Setting ErrFile to fd 2...
	I0108 20:18:42.349765  685162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:18:42.350159  685162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:18:42.351369  685162 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:42.351570  685162 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:18:42.352410  685162 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
	I0108 20:18:42.379688  685162 ssh_runner.go:195] Run: systemctl --version
	I0108 20:18:42.379797  685162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
	I0108 20:18:42.409642  685162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
	I0108 20:18:42.512434  685162 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0108 20:18:42.512571  685162 cache_images.go:254] Failed to load cached images for profile functional-819954. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0108 20:18:42.512606  685162 cache_images.go:262] succeeded pushing to: 
	I0108 20:18:42.512615  685162 cache_images.go:263] failed pushing to: functional-819954

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (59.72s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-918006 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-918006 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.924090887s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-918006 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-918006 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1b8d32f1-faaf-45d2-91e4-7d84c4647f96] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1b8d32f1-faaf-45d2-91e4-7d84c4647f96] Running
E0108 20:22:46.390832  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003210287s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-918006 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.023167718s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
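The nslookup timeout above means the ingress-dns responder at the node IP never answered on port 53. A small Go sketch of the equivalent query, pointing a custom resolver at the node address instead of the host's default DNS (the address and hostname are taken from the log; everything else is illustrative):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Equivalent of `nslookup hello-john.test 192.168.49.2`: bypass the host
		// resolver and dial the minikube node's DNS endpoint directly.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// Matches the ";; connection timed out; no servers could be reached" outcome.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}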
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons disable ingress-dns --alsologtostderr -v=1
E0108 20:23:14.080376  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons disable ingress-dns --alsologtostderr -v=1: (12.244981312s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons disable ingress --alsologtostderr -v=1: (7.54509629s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-918006
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-918006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ddf04b93cf2361cb3f063bfe81a65e33f09656996d02895bca9caf5a80900ab3",
	        "Created": "2024-01-08T20:20:55.408388837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 689923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:20:55.77469087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/ddf04b93cf2361cb3f063bfe81a65e33f09656996d02895bca9caf5a80900ab3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddf04b93cf2361cb3f063bfe81a65e33f09656996d02895bca9caf5a80900ab3/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddf04b93cf2361cb3f063bfe81a65e33f09656996d02895bca9caf5a80900ab3/hosts",
	        "LogPath": "/var/lib/docker/containers/ddf04b93cf2361cb3f063bfe81a65e33f09656996d02895bca9caf5a80900ab3/ddf04b93cf2361cb3f063bfe81a65e33f09656996d02895bca9caf5a80900ab3-json.log",
	        "Name": "/ingress-addon-legacy-918006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-918006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-918006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c5a8dd21387ced2eb52953ef18263631981cb01151523d65dc664190f9ccfec6-init/diff:/var/lib/docker/overlay2/5440a5a336c464ed564efc18a632104b770481b7cc483f7cadb6269a7b019538/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5a8dd21387ced2eb52953ef18263631981cb01151523d65dc664190f9ccfec6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5a8dd21387ced2eb52953ef18263631981cb01151523d65dc664190f9ccfec6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5a8dd21387ced2eb52953ef18263631981cb01151523d65dc664190f9ccfec6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-918006",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-918006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-918006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-918006",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-918006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "888f79a13c995355a3ddc0b577a6840cc88cda86f2041a0f8c87b8b7a19a24e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/888f79a13c99",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-918006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ddf04b93cf23",
	                        "ingress-addon-legacy-918006"
	                    ],
	                    "NetworkID": "e50677977df065f1b43bc8cdbeeed763819302c782443f5b1e3c2d4b904d344c",
	                    "EndpointID": "0f3cbdb02ca3b9451580f0b4891a14c3819e0de81b71623144469bbd1d0d8ad1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-918006 -n ingress-addon-legacy-918006
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-918006 logs -n 25: (1.467493423s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-819954                                                  | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-819954                                                  | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| ssh            | functional-819954 ssh findmnt                                         | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC |                     |
	|                | -T /mount1                                                            |                             |         |         |                     |                     |
	| ssh            | functional-819954 ssh findmnt                                         | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | -T /mount1                                                            |                             |         |         |                     |                     |
	| ssh            | functional-819954 ssh findmnt                                         | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | -T /mount2                                                            |                             |         |         |                     |                     |
	| ssh            | functional-819954 ssh findmnt                                         | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | -T /mount3                                                            |                             |         |         |                     |                     |
	| mount          | -p functional-819954                                                  | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC |                     |
	|                | --kill=true                                                           |                             |         |         |                     |                     |
	| update-context | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| image          | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | image ls --format short                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | image ls --format yaml                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| ssh            | functional-819954 ssh pgrep                                           | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC |                     |
	|                | buildkitd                                                             |                             |         |         |                     |                     |
	| image          | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | image ls --format json                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-819954                                                     | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | image ls --format table                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-819954 image build -t                                      | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|                | localhost/my-image:functional-819954                                  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                             |         |         |                     |                     |
	| image          | functional-819954 image ls                                            | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	| delete         | -p functional-819954                                                  | functional-819954           | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	| start          | -p ingress-addon-legacy-918006                                        | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:22 UTC |
	|                | --kubernetes-version=v1.18.20                                         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                  |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                        |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-918006                                           | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | addons enable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-918006                                           | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | addons enable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-918006                                           | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | ssh curl -s http://127.0.0.1/                                         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-918006 ip                                        | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	| addons         | ingress-addon-legacy-918006                                           | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:23 UTC | 08 Jan 24 20:23 UTC |
	|                | addons disable ingress-dns                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-918006                                           | ingress-addon-legacy-918006 | jenkins | v1.32.0 | 08 Jan 24 20:23 UTC | 08 Jan 24 20:23 UTC |
	|                | addons disable ingress                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:20:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:20:23.666062  689469 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:20:23.666272  689469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:20:23.666298  689469 out.go:309] Setting ErrFile to fd 2...
	I0108 20:20:23.666319  689469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:20:23.666619  689469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:20:23.667164  689469 out.go:303] Setting JSON to false
	I0108 20:20:23.668049  689469 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10964,"bootTime":1704734260,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:20:23.668151  689469 start.go:138] virtualization:  
	I0108 20:20:23.671273  689469 out.go:177] * [ingress-addon-legacy-918006] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:20:23.673784  689469 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:20:23.673870  689469 notify.go:220] Checking for updates...
	I0108 20:20:23.677226  689469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:20:23.679393  689469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:20:23.681483  689469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:20:23.683689  689469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:20:23.686032  689469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:20:23.688379  689469 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:20:23.718644  689469 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:20:23.718770  689469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:20:23.814844  689469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 20:20:23.803862958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:20:23.814944  689469 docker.go:295] overlay module found
	I0108 20:20:23.817750  689469 out.go:177] * Using the docker driver based on user configuration
	I0108 20:20:23.819957  689469 start.go:298] selected driver: docker
	I0108 20:20:23.819982  689469 start.go:902] validating driver "docker" against <nil>
	I0108 20:20:23.819998  689469 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:20:23.820702  689469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:20:23.886791  689469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 20:20:23.876570482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:20:23.886949  689469 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:20:23.887205  689469 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:20:23.889649  689469 out.go:177] * Using Docker driver with root privileges
	I0108 20:20:23.891797  689469 cni.go:84] Creating CNI manager for ""
	I0108 20:20:23.891817  689469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:20:23.891829  689469 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:20:23.891847  689469 start_flags.go:323] config:
	{Name:ingress-addon-legacy-918006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-918006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:20:23.894315  689469 out.go:177] * Starting control plane node ingress-addon-legacy-918006 in cluster ingress-addon-legacy-918006
	I0108 20:20:23.896216  689469 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:20:23.898255  689469 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:20:23.900372  689469 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0108 20:20:23.900451  689469 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:20:23.917950  689469 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:20:23.917976  689469 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:20:23.969433  689469 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0108 20:20:23.969474  689469 cache.go:56] Caching tarball of preloaded images
	I0108 20:20:23.969644  689469 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0108 20:20:23.972042  689469 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 20:20:23.974508  689469 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:20:24.094223  689469 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0108 20:20:47.513807  689469 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:20:47.513915  689469 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:20:48.704373  689469 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0108 20:20:48.704747  689469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/config.json ...
	I0108 20:20:48.704781  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/config.json: {Name:mk25766b71c18d74e3bb8cf6285bc5ef09a747d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:20:48.704972  689469 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:20:48.705039  689469 start.go:365] acquiring machines lock for ingress-addon-legacy-918006: {Name:mkf04db2ae01726fb7091b860f479b80cc80d223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:20:48.705106  689469 start.go:369] acquired machines lock for "ingress-addon-legacy-918006" in 49.64µs
	I0108 20:20:48.705128  689469 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-918006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-918006 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:20:48.705197  689469 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:20:48.708165  689469 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 20:20:48.708432  689469 start.go:159] libmachine.API.Create for "ingress-addon-legacy-918006" (driver="docker")
	I0108 20:20:48.708461  689469 client.go:168] LocalClient.Create starting
	I0108 20:20:48.708536  689469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem
	I0108 20:20:48.708570  689469 main.go:141] libmachine: Decoding PEM data...
	I0108 20:20:48.708592  689469 main.go:141] libmachine: Parsing certificate...
	I0108 20:20:48.708682  689469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem
	I0108 20:20:48.708704  689469 main.go:141] libmachine: Decoding PEM data...
	I0108 20:20:48.708719  689469 main.go:141] libmachine: Parsing certificate...
	I0108 20:20:48.709108  689469 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-918006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:20:48.726040  689469 cli_runner.go:211] docker network inspect ingress-addon-legacy-918006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:20:48.726122  689469 network_create.go:281] running [docker network inspect ingress-addon-legacy-918006] to gather additional debugging logs...
	I0108 20:20:48.726143  689469 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-918006
	W0108 20:20:48.742321  689469 cli_runner.go:211] docker network inspect ingress-addon-legacy-918006 returned with exit code 1
	I0108 20:20:48.742357  689469 network_create.go:284] error running [docker network inspect ingress-addon-legacy-918006]: docker network inspect ingress-addon-legacy-918006: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-918006 not found
	I0108 20:20:48.742372  689469 network_create.go:286] output of [docker network inspect ingress-addon-legacy-918006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-918006 not found
	
	** /stderr **
	I0108 20:20:48.742473  689469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:20:48.759253  689469 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000151ff0}
	I0108 20:20:48.759292  689469 network_create.go:124] attempt to create docker network ingress-addon-legacy-918006 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 20:20:48.759352  689469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-918006 ingress-addon-legacy-918006
	I0108 20:20:48.828970  689469 network_create.go:108] docker network ingress-addon-legacy-918006 192.168.49.0/24 created
	I0108 20:20:48.829026  689469 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-918006" container
	I0108 20:20:48.829107  689469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:20:48.845608  689469 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-918006 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-918006 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:20:48.864603  689469 oci.go:103] Successfully created a docker volume ingress-addon-legacy-918006
	I0108 20:20:48.864697  689469 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-918006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-918006 --entrypoint /usr/bin/test -v ingress-addon-legacy-918006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:20:50.395043  689469 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-918006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-918006 --entrypoint /usr/bin/test -v ingress-addon-legacy-918006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.530302478s)
	I0108 20:20:50.395072  689469 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-918006
	I0108 20:20:50.395091  689469 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0108 20:20:50.395111  689469 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:20:50.395194  689469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-918006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:20:55.326490  689469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-918006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.931252616s)
	I0108 20:20:55.326522  689469 kic.go:203] duration metric: took 4.931408 seconds to extract preloaded images to volume
	W0108 20:20:55.326664  689469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:20:55.326775  689469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:20:55.392192  689469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-918006 --name ingress-addon-legacy-918006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-918006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-918006 --network ingress-addon-legacy-918006 --ip 192.168.49.2 --volume ingress-addon-legacy-918006:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:20:55.784237  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Running}}
	I0108 20:20:55.814278  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Status}}
	I0108 20:20:55.842749  689469 cli_runner.go:164] Run: docker exec ingress-addon-legacy-918006 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:20:55.921165  689469 oci.go:144] the created container "ingress-addon-legacy-918006" has a running status.
	I0108 20:20:55.921198  689469 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa...
	I0108 20:20:56.281999  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:20:56.282071  689469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:20:56.312162  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Status}}
	I0108 20:20:56.340774  689469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:20:56.340799  689469 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-918006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:20:56.432370  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Status}}
	I0108 20:20:56.460090  689469 machine.go:88] provisioning docker machine ...
	I0108 20:20:56.460121  689469 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-918006"
	I0108 20:20:56.460191  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:20:56.480044  689469 main.go:141] libmachine: Using SSH client type: native
	I0108 20:20:56.480482  689469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0108 20:20:56.480499  689469 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-918006 && echo "ingress-addon-legacy-918006" | sudo tee /etc/hostname
	I0108 20:20:56.481188  689469 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 20:20:59.631907  689469 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-918006
	
	I0108 20:20:59.632074  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:20:59.649661  689469 main.go:141] libmachine: Using SSH client type: native
	I0108 20:20:59.650078  689469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0108 20:20:59.650104  689469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-918006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-918006/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-918006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:20:59.790415  689469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:20:59.790440  689469 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-649468/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-649468/.minikube}
	I0108 20:20:59.790470  689469 ubuntu.go:177] setting up certificates
	I0108 20:20:59.790481  689469 provision.go:83] configureAuth start
	I0108 20:20:59.790546  689469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-918006
	I0108 20:20:59.812305  689469 provision.go:138] copyHostCerts
	I0108 20:20:59.812353  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem
	I0108 20:20:59.812386  689469 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem, removing ...
	I0108 20:20:59.812393  689469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem
	I0108 20:20:59.812471  689469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/ca.pem (1078 bytes)
	I0108 20:20:59.812546  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem
	I0108 20:20:59.812562  689469 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem, removing ...
	I0108 20:20:59.812566  689469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem
	I0108 20:20:59.812591  689469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/cert.pem (1123 bytes)
	I0108 20:20:59.812629  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem
	I0108 20:20:59.812644  689469 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem, removing ...
	I0108 20:20:59.812648  689469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem
	I0108 20:20:59.812670  689469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-649468/.minikube/key.pem (1679 bytes)
	I0108 20:20:59.812764  689469 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-918006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-918006]
	I0108 20:21:00.117341  689469 provision.go:172] copyRemoteCerts
	I0108 20:21:00.117427  689469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:21:00.117476  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:00.184731  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:00.308947  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:21:00.309055  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:21:00.351500  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:21:00.351581  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 20:21:00.387104  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:21:00.387221  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:21:00.422417  689469 provision.go:86] duration metric: configureAuth took 631.921114ms
	I0108 20:21:00.422456  689469 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:21:00.422700  689469 config.go:182] Loaded profile config "ingress-addon-legacy-918006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0108 20:21:00.422718  689469 machine.go:91] provisioned docker machine in 3.962610483s
	I0108 20:21:00.422725  689469 client.go:171] LocalClient.Create took 11.714255586s
	I0108 20:21:00.422739  689469 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-918006" took 11.714309075s
	I0108 20:21:00.422752  689469 start.go:300] post-start starting for "ingress-addon-legacy-918006" (driver="docker")
	I0108 20:21:00.422761  689469 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:21:00.422815  689469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:21:00.422859  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:00.442556  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:00.544452  689469 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:21:00.548823  689469 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:21:00.548897  689469 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:21:00.548914  689469 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:21:00.548926  689469 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:21:00.548937  689469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/addons for local assets ...
	I0108 20:21:00.549049  689469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-649468/.minikube/files for local assets ...
	I0108 20:21:00.549141  689469 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem -> 6548052.pem in /etc/ssl/certs
	I0108 20:21:00.549154  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem -> /etc/ssl/certs/6548052.pem
	I0108 20:21:00.549265  689469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:21:00.560100  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:21:00.589847  689469 start.go:303] post-start completed in 167.080033ms
	I0108 20:21:00.590232  689469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-918006
	I0108 20:21:00.608462  689469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/config.json ...
	I0108 20:21:00.608754  689469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:21:00.608805  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:00.626739  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:00.723634  689469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:21:00.730003  689469 start.go:128] duration metric: createHost completed in 12.02478986s
	I0108 20:21:00.730034  689469 start.go:83] releasing machines lock for "ingress-addon-legacy-918006", held for 12.024915152s
	I0108 20:21:00.730121  689469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-918006
	I0108 20:21:00.748547  689469 ssh_runner.go:195] Run: cat /version.json
	I0108 20:21:00.748604  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:00.748611  689469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:21:00.748685  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:00.768756  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:00.771223  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:01.003345  689469 ssh_runner.go:195] Run: systemctl --version
	I0108 20:21:01.010106  689469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:21:01.016090  689469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 20:21:01.046570  689469 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:21:01.046657  689469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:21:01.081724  689469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 20:21:01.081747  689469 start.go:475] detecting cgroup driver to use...
	I0108 20:21:01.081782  689469 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:21:01.081834  689469 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:21:01.097110  689469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:21:01.111883  689469 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:21:01.111956  689469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:21:01.128709  689469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:21:01.146937  689469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:21:01.247137  689469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:21:01.359081  689469 docker.go:233] disabling docker service ...
	I0108 20:21:01.359147  689469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:21:01.381153  689469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:21:01.395354  689469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:21:01.495797  689469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:21:01.600036  689469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:21:01.613548  689469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:21:01.633875  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0108 20:21:01.645906  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 20:21:01.657915  689469 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 20:21:01.657990  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 20:21:01.670292  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:21:01.682752  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 20:21:01.695234  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:21:01.712692  689469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:21:01.724235  689469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 20:21:01.736086  689469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:21:01.746800  689469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:21:01.757146  689469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:21:01.858808  689469 ssh_runner.go:195] Run: sudo systemctl restart containerd
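The sed commands above rewrite /etc/containerd/config.toml in place (pause image, cgroupfs driver, runc v2 shim, CNI conf_dir) before containerd is restarted. As a rough sketch of how the result could be verified from the host, assuming the same container name and the crictl endpoint written to /etc/crictl.yaml earlier (not part of the test run):

	# Confirm the edited containerd settings inside the KIC node.
	docker exec ingress-addon-legacy-918006 grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	#   expected after the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.2"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	# Confirm the runtime answers on the socket that is waited on below.
	docker exec ingress-addon-legacy-918006 crictl --runtime-endpoint unix:///run/containerd/containerd.sock version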
	I0108 20:21:02.007779  689469 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:21:02.007913  689469 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 20:21:02.013381  689469 start.go:543] Will wait 60s for crictl version
	I0108 20:21:02.013505  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:02.018344  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:21:02.067884  689469 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0108 20:21:02.068050  689469 ssh_runner.go:195] Run: containerd --version
	I0108 20:21:02.096235  689469 ssh_runner.go:195] Run: containerd --version
	I0108 20:21:02.126069  689469 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.26 ...
	I0108 20:21:02.127985  689469 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-918006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:21:02.146252  689469 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:21:02.151309  689469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
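The one-liner above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal line, appends the mapping to the network gateway (192.168.49.1 on the 192.168.49.0/24 bridge created above), and copies the temp file back over /etc/hosts. A quick check of the resulting entry, as a sketch only:

	# Expected /etc/hosts entry inside the node after the rewrite (tab-separated).
	grep 'host.minikube.internal' /etc/hosts
	# 192.168.49.1	host.minikube.internal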
	I0108 20:21:02.164938  689469 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0108 20:21:02.165033  689469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:21:02.204931  689469 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:21:02.205034  689469 ssh_runner.go:195] Run: which lz4
	I0108 20:21:02.209514  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0108 20:21:02.209620  689469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:21:02.213966  689469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:21:02.214006  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0108 20:21:04.424092  689469 containerd.go:547] Took 2.214515 seconds to copy over tarball
	I0108 20:21:04.424182  689469 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:21:07.283674  689469 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.859457646s)
	I0108 20:21:07.283711  689469 containerd.go:554] Took 2.859593 seconds to extract the tarball
	I0108 20:21:07.283723  689469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 20:21:07.370415  689469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:21:07.472951  689469 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:21:07.618978  689469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:21:07.671294  689469 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:21:07.671317  689469 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 20:21:07.671358  689469 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:21:07.671562  689469 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:21:07.671645  689469 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:21:07.671709  689469 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:21:07.671774  689469 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:21:07.671851  689469 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 20:21:07.671913  689469 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:21:07.671979  689469 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 20:21:07.674220  689469 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:21:07.674725  689469 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:21:07.675058  689469 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:21:07.675111  689469 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:21:07.675390  689469 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 20:21:07.675577  689469 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:21:07.675636  689469 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:21:07.675865  689469 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 20:21:08.035928  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"
	I0108 20:21:08.036027  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.075252  689469 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.075443  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
	I0108 20:21:08.075532  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.083160  689469 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.083354  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
	I0108 20:21:08.083450  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.089032  689469 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.089217  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7"
	I0108 20:21:08.089306  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.095015  689469 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.095133  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79"
	I0108 20:21:08.095265  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.107439  689469 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.107646  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18"
	I0108 20:21:08.107703  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.107959  689469 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.108073  689469 containerd.go:251] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03"
	I0108 20:21:08.108122  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0108 20:21:08.195390  689469 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0108 20:21:08.195549  689469 containerd.go:251] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I0108 20:21:08.195638  689469 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 20:21:08.331679  689469 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0108 20:21:08.331820  689469 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 20:21:08.331912  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:08.620381  689469 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0108 20:21:08.620427  689469 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:21:08.620509  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:08.808170  689469 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0108 20:21:08.808254  689469 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 20:21:08.808335  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:08.848235  689469 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0108 20:21:08.848346  689469 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:21:08.848424  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:08.923847  689469 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0108 20:21:08.923945  689469 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:21:08.924027  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:09.046800  689469 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0108 20:21:09.046894  689469 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:21:09.047001  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:09.046840  689469 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0108 20:21:09.047109  689469 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:21:09.047170  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:09.047361  689469 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0108 20:21:09.047395  689469 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:21:09.047400  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 20:21:09.047435  689469 ssh_runner.go:195] Run: which crictl
	I0108 20:21:09.047470  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:21:09.047525  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 20:21:09.047563  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:21:09.047583  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:21:09.191185  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:21:09.191269  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 20:21:09.191324  689469 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:21:09.191390  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0108 20:21:09.191440  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 20:21:09.191492  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0108 20:21:09.191539  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 20:21:09.191584  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 20:21:09.281413  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 20:21:09.281474  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0108 20:21:09.281430  689469 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 20:21:09.281562  689469 cache_images.go:92] LoadImages completed in 1.610229929s
	W0108 20:21:09.281663  689469 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17907-649468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I0108 20:21:09.281741  689469 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:21:09.327501  689469 cni.go:84] Creating CNI manager for ""
	I0108 20:21:09.327525  689469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:21:09.327575  689469 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:21:09.327605  689469 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-918006 NodeName:ingress-addon-legacy-918006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 20:21:09.327775  689469 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-918006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:21:09.327847  689469 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-918006 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-918006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:21:09.327920  689469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 20:21:09.338602  689469 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:21:09.338715  689469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:21:09.349403  689469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0108 20:21:09.370318  689469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 20:21:09.391545  689469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0108 20:21:09.413497  689469 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:21:09.418034  689469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:21:09.431435  689469 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006 for IP: 192.168.49.2
	I0108 20:21:09.431470  689469 certs.go:190] acquiring lock for shared ca certs: {Name:mk8baa4ad3918f12788abe17f587583afd1a9c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:09.431608  689469 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key
	I0108 20:21:09.431659  689469 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key
	I0108 20:21:09.431707  689469 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.key
	I0108 20:21:09.431721  689469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt with IP's: []
	I0108 20:21:09.806002  689469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt ...
	I0108 20:21:09.806035  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: {Name:mkc7093505b9261be48c0439b87daa9c8aa2d350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:09.806256  689469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.key ...
	I0108 20:21:09.806275  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.key: {Name:mk1250ef9010d362a6b91a0a1680d10ee4fb6976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:09.806367  689469 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key.dd3b5fb2
	I0108 20:21:09.806387  689469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:21:10.210939  689469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt.dd3b5fb2 ...
	I0108 20:21:10.210976  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt.dd3b5fb2: {Name:mk47fa02aa907eb0e710525bff60e80be432c181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:10.211163  689469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key.dd3b5fb2 ...
	I0108 20:21:10.211178  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key.dd3b5fb2: {Name:mk934689af45d900a71a6b55926457a4a7e5786f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:10.211263  689469 certs.go:337] copying /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt
	I0108 20:21:10.211346  689469 certs.go:341] copying /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key
	I0108 20:21:10.211413  689469 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.key
	I0108 20:21:10.211426  689469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.crt with IP's: []
	I0108 20:21:10.881253  689469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.crt ...
	I0108 20:21:10.881280  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.crt: {Name:mk6328635a02108afa7a3d673d70af9c8b79d564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:10.881609  689469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.key ...
	I0108 20:21:10.881629  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.key: {Name:mk2cd5e4816dca125db344abd5ddcf8e9a87d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:10.882149  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:21:10.882178  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:21:10.882190  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:21:10.882202  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:21:10.882217  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:21:10.882239  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:21:10.882254  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:21:10.882266  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:21:10.882325  689469 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem (1338 bytes)
	W0108 20:21:10.882364  689469 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805_empty.pem, impossibly tiny 0 bytes
	I0108 20:21:10.882376  689469 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:21:10.882406  689469 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:21:10.882435  689469 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:21:10.882463  689469 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/home/jenkins/minikube-integration/17907-649468/.minikube/certs/key.pem (1679 bytes)
	I0108 20:21:10.882509  689469 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem (1708 bytes)
	I0108 20:21:10.882553  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem -> /usr/share/ca-certificates/654805.pem
	I0108 20:21:10.882569  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem -> /usr/share/ca-certificates/6548052.pem
	I0108 20:21:10.882584  689469 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:21:10.883197  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:21:10.912623  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 20:21:10.942483  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:21:10.973590  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:21:11.008384  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:21:11.038317  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:21:11.068459  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:21:11.098364  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 20:21:11.129349  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/certs/654805.pem --> /usr/share/ca-certificates/654805.pem (1338 bytes)
	I0108 20:21:11.159279  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/ssl/certs/6548052.pem --> /usr/share/ca-certificates/6548052.pem (1708 bytes)
	I0108 20:21:11.188327  689469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:21:11.217754  689469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:21:11.239630  689469 ssh_runner.go:195] Run: openssl version
	I0108 20:21:11.246757  689469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/654805.pem && ln -fs /usr/share/ca-certificates/654805.pem /etc/ssl/certs/654805.pem"
	I0108 20:21:11.258653  689469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/654805.pem
	I0108 20:21:11.263259  689469 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/654805.pem
	I0108 20:21:11.263322  689469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/654805.pem
	I0108 20:21:11.271928  689469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/654805.pem /etc/ssl/certs/51391683.0"
	I0108 20:21:11.283608  689469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6548052.pem && ln -fs /usr/share/ca-certificates/6548052.pem /etc/ssl/certs/6548052.pem"
	I0108 20:21:11.294867  689469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6548052.pem
	I0108 20:21:11.299445  689469 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/6548052.pem
	I0108 20:21:11.299538  689469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6548052.pem
	I0108 20:21:11.307978  689469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6548052.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:21:11.319301  689469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:21:11.330478  689469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:21:11.335180  689469 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:21:11.335253  689469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:21:11.344895  689469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
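The three certificate blocks above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it into /etc/ssl/certs under <hash>.0 so tools using the system trust store can find it. A minimal sketch of that convention for a single certificate (illustrative only; the path comes from the log, the shell variables do not):

	# Recreate the hash-named symlink the way the commands above do it.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	ls -l "/etc/ssl/certs/${hash}.0"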
	I0108 20:21:11.356804  689469 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:21:11.361340  689469 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:21:11.361398  689469 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-918006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-918006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:21:11.361474  689469 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 20:21:11.361534  689469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:21:11.403080  689469 cri.go:89] found id: ""
	I0108 20:21:11.403161  689469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:21:11.414070  689469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:21:11.424908  689469 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:21:11.424986  689469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:21:11.435878  689469 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:21:11.435932  689469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 20:21:11.493790  689469 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 20:21:11.494001  689469 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:21:11.552957  689469 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:21:11.553091  689469 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:21:11.553152  689469 kubeadm.go:322] OS: Linux
	I0108 20:21:11.553217  689469 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:21:11.553292  689469 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:21:11.553356  689469 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:21:11.553434  689469 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:21:11.553516  689469 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:21:11.553592  689469 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:21:11.649494  689469 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:21:11.649604  689469 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:21:11.649699  689469 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:21:11.880151  689469 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:21:11.881820  689469 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:21:11.882056  689469 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:21:11.994279  689469 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:21:11.998660  689469 out.go:204]   - Generating certificates and keys ...
	I0108 20:21:11.998797  689469 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:21:11.998910  689469 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:21:12.862768  689469 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:21:13.478006  689469 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:21:13.919687  689469 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:21:14.361048  689469 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:21:14.835181  689469 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:21:14.835545  689469 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-918006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:21:15.236740  689469 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:21:15.237050  689469 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-918006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:21:15.433918  689469 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:21:15.832131  689469 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:21:16.634683  689469 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:21:16.634938  689469 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:21:17.284540  689469 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:21:17.572682  689469 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:21:17.877510  689469 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:21:18.750945  689469 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:21:18.751814  689469 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:21:18.754054  689469 out.go:204]   - Booting up control plane ...
	I0108 20:21:18.754158  689469 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:21:18.769427  689469 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:21:18.771115  689469 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:21:18.772463  689469 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:21:18.775478  689469 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:21:31.278230  689469 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502633 seconds
	I0108 20:21:31.278346  689469 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:21:31.291044  689469 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:21:31.819253  689469 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:21:31.819690  689469 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-918006 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 20:21:32.333869  689469 kubeadm.go:322] [bootstrap-token] Using token: ng3rwk.1mhhd306y7ouzpon
	I0108 20:21:32.336396  689469 out.go:204]   - Configuring RBAC rules ...
	I0108 20:21:32.336516  689469 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:21:32.342321  689469 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:21:32.352157  689469 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:21:32.357069  689469 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:21:32.360109  689469 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:21:32.363599  689469 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:21:32.379072  689469 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:21:32.662595  689469 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:21:32.799678  689469 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:21:32.801426  689469 kubeadm.go:322] 
	I0108 20:21:32.801511  689469 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:21:32.801518  689469 kubeadm.go:322] 
	I0108 20:21:32.801591  689469 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:21:32.801596  689469 kubeadm.go:322] 
	I0108 20:21:32.801620  689469 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:21:32.801981  689469 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:21:32.802061  689469 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:21:32.802070  689469 kubeadm.go:322] 
	I0108 20:21:32.802120  689469 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:21:32.802195  689469 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:21:32.802264  689469 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:21:32.802275  689469 kubeadm.go:322] 
	I0108 20:21:32.802385  689469 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:21:32.802461  689469 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:21:32.802469  689469 kubeadm.go:322] 
	I0108 20:21:32.802549  689469 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ng3rwk.1mhhd306y7ouzpon \
	I0108 20:21:32.802651  689469 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e7aa231785652d24090e2cd097637f46032eb43e585bbef4633ff038c4bd0902 \
	I0108 20:21:32.802696  689469 kubeadm.go:322]     --control-plane 
	I0108 20:21:32.802702  689469 kubeadm.go:322] 
	I0108 20:21:32.802787  689469 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:21:32.802800  689469 kubeadm.go:322] 
	I0108 20:21:32.802893  689469 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ng3rwk.1mhhd306y7ouzpon \
	I0108 20:21:32.803014  689469 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e7aa231785652d24090e2cd097637f46032eb43e585bbef4633ff038c4bd0902 
	I0108 20:21:32.806260  689469 kubeadm.go:322] W0108 20:21:11.493223    1086 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 20:21:32.806480  689469 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:21:32.806584  689469 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:21:32.806707  689469 kubeadm.go:322] W0108 20:21:18.769311    1086 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:21:32.806827  689469 kubeadm.go:322] W0108 20:21:18.771023    1086 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
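	For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed from the cluster CA certificate. A minimal sketch, to be run on the node, assuming the CA sits at /var/lib/minikube/certs/ca.crt (per the certificateDir used in the certs phase) and uses an RSA key; it should reproduce the sha256 value shown above:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'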
	I0108 20:21:32.806845  689469 cni.go:84] Creating CNI manager for ""
	I0108 20:21:32.806854  689469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:21:32.809379  689469 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:21:32.811797  689469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:21:32.817887  689469 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0108 20:21:32.817905  689469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:21:32.841600  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
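	Since the docker driver plus containerd runtime selects kindnet as the CNI (per the cni.go lines above), one quick check that the applied manifest took effect is to list the kindnet pods. A sketch, assuming the daemonset labels its pods with app=kindnet (the label name is not shown in this log):
	  kubectl --context ingress-addon-legacy-918006 -n kube-system get pods -l app=kindnet -o wide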
	I0108 20:21:33.356652  689469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:21:33.356795  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:33.356882  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=ingress-addon-legacy-918006 minikube.k8s.io/updated_at=2024_01_08T20_21_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:33.390995  689469 ops.go:34] apiserver oom_adj: -16
	I0108 20:21:33.500983  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:34.003339  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:34.502008  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:35.001078  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:35.501116  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:36.001144  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:36.501689  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:37.003324  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:37.501169  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:38.002023  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:38.502059  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:39.002768  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:39.501718  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:40.001107  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:40.501725  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:41.001206  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:41.501626  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:42.010693  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:42.501643  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:43.001178  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:43.501146  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:44.002175  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:44.501536  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:45.007894  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:45.501785  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:46.001159  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:46.501246  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:47.003184  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:47.501755  689469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:21:47.611277  689469 kubeadm.go:1088] duration metric: took 14.254541426s to wait for elevateKubeSystemPrivileges.
	I0108 20:21:47.611312  689469 kubeadm.go:406] StartCluster complete in 36.249923689s
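	The repeated "kubectl get sa default" calls above are minikube polling, inside the node, for the default service account to exist before it grants kube-system elevated privileges. The same wait can be written as a short loop; a sketch meant to run on the node, reusing the binary and kubeconfig paths from this log:
	  # poll roughly twice a second until the default service account appears
	  until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done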
	I0108 20:21:47.611332  689469 settings.go:142] acquiring lock: {Name:mkb63cd96d7a856f465b0592d8a592dc849b8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:47.611402  689469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:21:47.612088  689469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-649468/kubeconfig: {Name:mk40e5900c8ed31a9e7a0515010236c17752c8ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:21:47.612306  689469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:21:47.612602  689469 config.go:182] Loaded profile config "ingress-addon-legacy-918006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0108 20:21:47.612716  689469 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:21:47.612788  689469 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-918006"
	I0108 20:21:47.612813  689469 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-918006"
	I0108 20:21:47.612868  689469 host.go:66] Checking if "ingress-addon-legacy-918006" exists ...
	I0108 20:21:47.612886  689469 kapi.go:59] client config for ingress-addon-legacy-918006: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.key", CAFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:21:47.613387  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Status}}
	I0108 20:21:47.614191  689469 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:21:47.614601  689469 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-918006"
	I0108 20:21:47.614625  689469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-918006"
	I0108 20:21:47.614929  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Status}}
	I0108 20:21:47.675340  689469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:21:47.676262  689469 kapi.go:59] client config for ingress-addon-legacy-918006: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.key", CAFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:21:47.678295  689469 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-918006"
	I0108 20:21:47.678334  689469 host.go:66] Checking if "ingress-addon-legacy-918006" exists ...
	I0108 20:21:47.678770  689469 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-918006 --format={{.State.Status}}
	I0108 20:21:47.679002  689469 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:21:47.679037  689469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:21:47.679091  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:47.712932  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:47.725640  689469 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:21:47.725668  689469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:21:47.725731  689469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-918006
	I0108 20:21:47.752589  689469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/ingress-addon-legacy-918006/id_rsa Username:docker}
	I0108 20:21:47.919618  689469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 20:21:47.924600  689469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:21:47.980879  689469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:21:48.120625  689469 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-918006" context rescaled to 1 replicas
	I0108 20:21:48.120669  689469 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 20:21:48.123467  689469 out.go:177] * Verifying Kubernetes components...
	I0108 20:21:48.125831  689469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:21:48.584782  689469 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 20:21:48.680083  689469 kapi.go:59] client config for ingress-addon-legacy-918006: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.key", CAFile:"/home/jenkins/minikube-integration/17907-649468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:21:48.680436  689469 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-918006" to be "Ready" ...
	I0108 20:21:48.687446  689469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 20:21:48.689471  689469 addons.go:508] enable addons completed in 1.076743631s: enabled=[default-storageclass storage-provisioner]
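	To double-check which addons ended up enabled on this profile, the addon list can be queried directly with the same binary used by the test run:
	  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons list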
	I0108 20:21:48.690788  689469 node_ready.go:49] node "ingress-addon-legacy-918006" has status "Ready":"True"
	I0108 20:21:48.690847  689469 node_ready.go:38] duration metric: took 10.369936ms waiting for node "ingress-addon-legacy-918006" to be "Ready" ...
	I0108 20:21:48.690872  689469 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:21:48.701994  689469 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-l54fd" in "kube-system" namespace to be "Ready" ...
	I0108 20:21:50.708508  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:21:53.207172  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:21:55.207972  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:21:57.208405  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:21:59.708553  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:22:02.208498  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:22:04.707900  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:22:07.208508  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:22:09.708327  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:22:11.708576  689469 pod_ready.go:102] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"False"
	I0108 20:22:13.708354  689469 pod_ready.go:92] pod "coredns-66bff467f8-l54fd" in "kube-system" namespace has status "Ready":"True"
	I0108 20:22:13.708388  689469 pod_ready.go:81] duration metric: took 25.006321508s waiting for pod "coredns-66bff467f8-l54fd" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.708400  689469 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-qt7pc" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.710344  689469 pod_ready.go:97] error getting pod "coredns-66bff467f8-qt7pc" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-qt7pc" not found
	I0108 20:22:13.710371  689469 pod_ready.go:81] duration metric: took 1.964261ms waiting for pod "coredns-66bff467f8-qt7pc" in "kube-system" namespace to be "Ready" ...
	E0108 20:22:13.710382  689469 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-qt7pc" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-qt7pc" not found
	I0108 20:22:13.710389  689469 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.715165  689469 pod_ready.go:92] pod "etcd-ingress-addon-legacy-918006" in "kube-system" namespace has status "Ready":"True"
	I0108 20:22:13.715190  689469 pod_ready.go:81] duration metric: took 4.7935ms waiting for pod "etcd-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.715203  689469 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.720471  689469 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-918006" in "kube-system" namespace has status "Ready":"True"
	I0108 20:22:13.720538  689469 pod_ready.go:81] duration metric: took 5.324935ms waiting for pod "kube-apiserver-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.720555  689469 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.725471  689469 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-918006" in "kube-system" namespace has status "Ready":"True"
	I0108 20:22:13.725498  689469 pod_ready.go:81] duration metric: took 4.934333ms waiting for pod "kube-controller-manager-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.725511  689469 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gwnp6" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.903153  689469 request.go:629] Waited for 175.254731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-918006
	I0108 20:22:13.905899  689469 pod_ready.go:92] pod "kube-proxy-gwnp6" in "kube-system" namespace has status "Ready":"True"
	I0108 20:22:13.905924  689469 pod_ready.go:81] duration metric: took 180.405406ms waiting for pod "kube-proxy-gwnp6" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:13.905936  689469 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:14.103448  689469 request.go:629] Waited for 197.377791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-918006
	I0108 20:22:14.303538  689469 request.go:629] Waited for 197.30424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-918006
	I0108 20:22:14.306360  689469 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-918006" in "kube-system" namespace has status "Ready":"True"
	I0108 20:22:14.306438  689469 pod_ready.go:81] duration metric: took 400.469428ms waiting for pod "kube-scheduler-ingress-addon-legacy-918006" in "kube-system" namespace to be "Ready" ...
	I0108 20:22:14.306482  689469 pod_ready.go:38] duration metric: took 25.615584793s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:22:14.306504  689469 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:22:14.306573  689469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:22:14.320191  689469 api_server.go:72] duration metric: took 26.199461175s to wait for apiserver process to appear ...
	I0108 20:22:14.320218  689469 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:22:14.320247  689469 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 20:22:14.329288  689469 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 20:22:14.330200  689469 api_server.go:141] control plane version: v1.18.20
	I0108 20:22:14.330227  689469 api_server.go:131] duration metric: took 10.001858ms to wait for apiserver health ...
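	The healthz probe above can be reproduced by hand from the host; a sketch that skips TLS verification (the apiserver serving cert is not in the local trust store) and assumes the default RBAC bootstrap roles, which allow anonymous access to /healthz:
	  curl -k https://192.168.49.2:8443/healthz
	  # expected output: ok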
	I0108 20:22:14.330236  689469 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:22:14.503600  689469 request.go:629] Waited for 173.284654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:22:14.516466  689469 system_pods.go:59] 8 kube-system pods found
	I0108 20:22:14.516504  689469 system_pods.go:61] "coredns-66bff467f8-l54fd" [435923e4-13e0-4750-97ef-a19e5bf5abdb] Running
	I0108 20:22:14.516511  689469 system_pods.go:61] "etcd-ingress-addon-legacy-918006" [ad205dd3-d7c2-4eee-8b21-4fa95a809d7c] Running
	I0108 20:22:14.516516  689469 system_pods.go:61] "kindnet-nvzkh" [752d81d6-5400-4fd4-8a2d-fc6f7b7d3a37] Running
	I0108 20:22:14.516521  689469 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-918006" [b6c95758-8147-40da-a7de-6518f332ecfa] Running
	I0108 20:22:14.516531  689469 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-918006" [bc93c6e4-9f8a-46d7-89ff-73993719de49] Running
	I0108 20:22:14.516536  689469 system_pods.go:61] "kube-proxy-gwnp6" [e3917260-8e7e-4751-959a-714464f5e81e] Running
	I0108 20:22:14.516541  689469 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-918006" [052a473c-b026-4d47-af54-465d8037319f] Running
	I0108 20:22:14.516546  689469 system_pods.go:61] "storage-provisioner" [bb66c9d9-b931-4f71-b6eb-0a729850bb20] Running
	I0108 20:22:14.516556  689469 system_pods.go:74] duration metric: took 186.311441ms to wait for pod list to return data ...
	I0108 20:22:14.516571  689469 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:22:14.703999  689469 request.go:629] Waited for 187.322896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:22:14.706529  689469 default_sa.go:45] found service account: "default"
	I0108 20:22:14.706559  689469 default_sa.go:55] duration metric: took 189.98033ms for default service account to be created ...
	I0108 20:22:14.706569  689469 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:22:14.904012  689469 request.go:629] Waited for 197.35896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:22:14.910676  689469 system_pods.go:86] 8 kube-system pods found
	I0108 20:22:14.910720  689469 system_pods.go:89] "coredns-66bff467f8-l54fd" [435923e4-13e0-4750-97ef-a19e5bf5abdb] Running
	I0108 20:22:14.910727  689469 system_pods.go:89] "etcd-ingress-addon-legacy-918006" [ad205dd3-d7c2-4eee-8b21-4fa95a809d7c] Running
	I0108 20:22:14.910733  689469 system_pods.go:89] "kindnet-nvzkh" [752d81d6-5400-4fd4-8a2d-fc6f7b7d3a37] Running
	I0108 20:22:14.910743  689469 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-918006" [b6c95758-8147-40da-a7de-6518f332ecfa] Running
	I0108 20:22:14.910749  689469 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-918006" [bc93c6e4-9f8a-46d7-89ff-73993719de49] Running
	I0108 20:22:14.910753  689469 system_pods.go:89] "kube-proxy-gwnp6" [e3917260-8e7e-4751-959a-714464f5e81e] Running
	I0108 20:22:14.910758  689469 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-918006" [052a473c-b026-4d47-af54-465d8037319f] Running
	I0108 20:22:14.910764  689469 system_pods.go:89] "storage-provisioner" [bb66c9d9-b931-4f71-b6eb-0a729850bb20] Running
	I0108 20:22:14.910778  689469 system_pods.go:126] duration metric: took 204.184512ms to wait for k8s-apps to be running ...
	I0108 20:22:14.910803  689469 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:22:14.910882  689469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:22:14.924923  689469 system_svc.go:56] duration metric: took 14.121656ms WaitForService to wait for kubelet.
	I0108 20:22:14.925017  689469 kubeadm.go:581] duration metric: took 26.804265826s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:22:14.925045  689469 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:22:15.103557  689469 request.go:629] Waited for 178.387559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0108 20:22:15.106972  689469 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:22:15.107011  689469 node_conditions.go:123] node cpu capacity is 2
	I0108 20:22:15.107026  689469 node_conditions.go:105] duration metric: took 181.974675ms to run NodePressure ...
	I0108 20:22:15.107040  689469 start.go:228] waiting for startup goroutines ...
	I0108 20:22:15.107047  689469 start.go:233] waiting for cluster config update ...
	I0108 20:22:15.107063  689469 start.go:242] writing updated cluster config ...
	I0108 20:22:15.107404  689469 ssh_runner.go:195] Run: rm -f paused
	I0108 20:22:15.180627  689469 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 20:22:15.183537  689469 out.go:177] 
	W0108 20:22:15.185807  689469 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 20:22:15.187865  689469 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 20:22:15.190212  689469 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-918006" cluster and "default" namespace by default
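	The cluster brought up in this log can be recreated with a plain start invocation; a sketch using the profile name, Kubernetes version, driver and runtime recorded above (other flags used by the test harness are omitted):
	  out/minikube-linux-arm64 start -p ingress-addon-legacy-918006 \
	    --kubernetes-version=v1.18.20 --driver=docker --container-runtime=containerd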
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	38e52378f295d       dd1b12fcb6097       19 seconds ago       Exited              hello-world-app           2                   ec09891ce6a80       hello-world-app-5f5d8b66bb-fw2jg
	b987d57709bbc       74077e780ec71       43 seconds ago       Running             nginx                     0                   d2c627481e39d       nginx
	a21b4ea1b4874       d7f0cba3aa5bf       About a minute ago   Exited              controller                0                   b2bf4dfdd0ccd       ingress-nginx-controller-7fcf777cb7-cb746
	741ff58595ad7       a883f7fc35610       About a minute ago   Exited              patch                     0                   3991901ca292b       ingress-nginx-admission-patch-l4t4k
	42888d91a5da2       a883f7fc35610       About a minute ago   Exited              create                    0                   4102bcbb6ca22       ingress-nginx-admission-create-gb47x
	eb4f1383b3399       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   250e309f20849       coredns-66bff467f8-l54fd
	91a620e8ecd1e       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   e9c66c5185f57       storage-provisioner
	8b38694cae024       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   040641605923c       kindnet-nvzkh
	4d88081a9adc3       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   07d2cc2fab1bb       kube-proxy-gwnp6
	d087d6da7afee       ab707b0a0ea33       2 minutes ago        Running             etcd                      0                   6232f8437d692       etcd-ingress-addon-legacy-918006
	e823ecdaa7315       68a4fac29a865       2 minutes ago        Running             kube-controller-manager   0                   05f3580208117       kube-controller-manager-ingress-addon-legacy-918006
	1c799060fa297       095f37015706d       2 minutes ago        Running             kube-scheduler            0                   b995c0fbd31a5       kube-scheduler-ingress-addon-legacy-918006
	e474c6a4ca0ff       2694cf044d665       2 minutes ago        Running             kube-apiserver            0                   4810331e5fc51       kube-apiserver-ingress-addon-legacy-918006
	
	
	==> containerd <==
	Jan 08 20:23:08 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:08.217068885Z" level=info msg="TearDown network for sandbox \"4eac3b1ad3d09476e16985fd21e11505a329ba922333a6ebd0e685e78c9f6592\" successfully"
	Jan 08 20:23:08 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:08.217110690Z" level=info msg="StopPodSandbox for \"4eac3b1ad3d09476e16985fd21e11505a329ba922333a6ebd0e685e78c9f6592\" returns successfully"
	Jan 08 20:23:17 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:17.150684421Z" level=info msg="StopContainer for \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" with timeout 2 (s)"
	Jan 08 20:23:17 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:17.151415608Z" level=info msg="Stop container \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" with signal terminated"
	Jan 08 20:23:17 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:17.190902010Z" level=info msg="StopContainer for \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" with timeout 2 (s)"
	Jan 08 20:23:17 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:17.197121355Z" level=info msg="Skipping the sending of signal terminated to container \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" because a prior stop with timeout>0 request already sent the signal"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.169172863Z" level=info msg="Kill container \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\""
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.198073346Z" level=info msg="Kill container \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\""
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.246736365Z" level=info msg="shim disconnected" id=a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.246800480Z" level=warning msg="cleaning up after shim disconnected" id=a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf namespace=k8s.io
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.246811499Z" level=info msg="cleaning up dead shim"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.257280415Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:23:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4634 runtime=io.containerd.runc.v2\n"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.261753581Z" level=info msg="StopContainer for \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" returns successfully"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.262333573Z" level=info msg="StopContainer for \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" returns successfully"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.262924863Z" level=info msg="StopPodSandbox for \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\""
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.262991505Z" level=info msg="Container to stop \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.263209940Z" level=info msg="StopPodSandbox for \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\""
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.263245485Z" level=info msg="Container to stop \"a21b4ea1b48744d0fb952127019b3336f2ee4eb0c02b14a4e765ed7af85756cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.300052677Z" level=info msg="shim disconnected" id=b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.301050783Z" level=warning msg="cleaning up after shim disconnected" id=b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526 namespace=k8s.io
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.301083004Z" level=info msg="cleaning up dead shim"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.311456396Z" level=warning msg="cleanup warnings time=\"2024-01-08T20:23:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4673 runtime=io.containerd.runc.v2\n"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.360128064Z" level=error msg="StopPodSandbox for \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\" failed" error="failed to destroy network for sandbox \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-5951103a5c1308dd94e5e --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.393971470Z" level=info msg="TearDown network for sandbox \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\" successfully"
	Jan 08 20:23:19 ingress-addon-legacy-918006 containerd[824]: time="2024-01-08T20:23:19.394160416Z" level=info msg="StopPodSandbox for \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\" returns successfully"
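	The StopPodSandbox error above comes from the portmap CNI plugin trying to flush a DNAT chain (CNI-DN-5951103a5c1308dd94e5e) that is already gone, hence "No chain/target/match by that name". Whether any CNI-DN chains remain on the node can be checked from the host; a sketch using minikube ssh on this profile:
	  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 ssh \
	    "sudo iptables -t nat -S | grep CNI-DN || echo no CNI-DN chains left"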
	
	
	==> coredns [eb4f1383b339912705301fe27c534d0551d98bb0a0ad23a63cfb1cf7063de5dd] <==
	[INFO] 10.244.0.5:46807 - 42471 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034691s
	[INFO] 10.244.0.5:46807 - 37944 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034043s
	[INFO] 10.244.0.5:50028 - 17034 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.01080424s
	[INFO] 10.244.0.5:50028 - 61032 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000155667s
	[INFO] 10.244.0.5:46807 - 54812 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.007696988s
	[INFO] 10.244.0.5:46807 - 18853 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004089834s
	[INFO] 10.244.0.5:46807 - 16689 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060652s
	[INFO] 10.244.0.5:56577 - 61999 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000108372s
	[INFO] 10.244.0.5:49508 - 48105 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000026141s
	[INFO] 10.244.0.5:49508 - 30107 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086408s
	[INFO] 10.244.0.5:56577 - 10518 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035175s
	[INFO] 10.244.0.5:56577 - 64840 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048049s
	[INFO] 10.244.0.5:49508 - 19926 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030112s
	[INFO] 10.244.0.5:49508 - 8343 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037219s
	[INFO] 10.244.0.5:56577 - 3144 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026691s
	[INFO] 10.244.0.5:49508 - 36990 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036866s
	[INFO] 10.244.0.5:56577 - 4199 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025953s
	[INFO] 10.244.0.5:49508 - 60396 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034232s
	[INFO] 10.244.0.5:56577 - 46326 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028382s
	[INFO] 10.244.0.5:49508 - 44884 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001772476s
	[INFO] 10.244.0.5:56577 - 42663 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001726938s
	[INFO] 10.244.0.5:49508 - 17004 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000959246s
	[INFO] 10.244.0.5:56577 - 32320 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00114862s
	[INFO] 10.244.0.5:56577 - 21458 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042051s
	[INFO] 10.244.0.5:49508 - 13964 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002226s
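	The NXDOMAIN answers above are the normal resolv.conf search-path expansion (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the fully qualified service name resolves with NOERROR. In-cluster DNS can be spot-checked from a throwaway pod; a sketch, assuming the busybox:1.28 image can be pulled:
	  kubectl --context ingress-addon-legacy-918006 run dns-test --image=busybox:1.28 \
	    --restart=Never --rm -it -- nslookup hello-world-app.default.svc.cluster.local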
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-918006
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-918006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=ingress-addon-legacy-918006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_21_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:21:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-918006
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:23:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:23:06 +0000   Mon, 08 Jan 2024 20:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:23:06 +0000   Mon, 08 Jan 2024 20:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:23:06 +0000   Mon, 08 Jan 2024 20:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:23:06 +0000   Mon, 08 Jan 2024 20:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-918006
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 479d5b4604ec46658fd566bda4949d58
	  System UUID:                16a56342-e1f9-4581-b4c8-5527c41fb043
	  Boot ID:                    cf8959e1-1119-4140-86a9-5e54dd11ba57
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-fw2jg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 coredns-66bff467f8-l54fd                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     98s
	  kube-system                 etcd-ingress-addon-legacy-918006                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-nvzkh                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      98s
	  kube-system                 kube-apiserver-ingress-addon-legacy-918006             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-918006    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-gwnp6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-ingress-addon-legacy-918006             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  2m4s (x5 over 2m4s)  kubelet     Node ingress-addon-legacy-918006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x5 over 2m4s)  kubelet     Node ingress-addon-legacy-918006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x4 over 2m4s)  kubelet     Node ingress-addon-legacy-918006 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s                 kubelet     Node ingress-addon-legacy-918006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet     Node ingress-addon-legacy-918006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s                 kubelet     Node ingress-addon-legacy-918006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                99s                  kubelet     Node ingress-addon-legacy-918006 status is now: NodeReady
	  Normal  Starting                 97s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001461] FS-Cache: O-key=[8] '9e3c5c0100000000'
	[  +0.000871] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001160] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000767cf050
	[  +0.001241] FS-Cache: N-key=[8] '9e3c5c0100000000'
	[  +0.003438] FS-Cache: Duplicate cookie detected
	[  +0.000912] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001223] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000b26db3f5
	[  +0.001492] FS-Cache: O-key=[8] '9e3c5c0100000000'
	[  +0.000917] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001330] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000002bc005a6
	[  +0.001402] FS-Cache: N-key=[8] '9e3c5c0100000000'
	[  +2.715769] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001189] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000480ec78d
	[  +0.001217] FS-Cache: O-key=[8] '9d3c5c0100000000'
	[  +0.000794] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001063] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000000ed85972
	[  +0.001251] FS-Cache: N-key=[8] '9d3c5c0100000000'
	[  +0.415064] FS-Cache: Duplicate cookie detected
	[  +0.001030] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001199] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000c061463b
	[  +0.001269] FS-Cache: O-key=[8] 'a33c5c0100000000'
	[  +0.000875] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001202] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000812956ec
	[  +0.001223] FS-Cache: N-key=[8] 'a33c5c0100000000'
	
	
	==> etcd [d087d6da7afeefc3eb90ada6f7cd083bb154807813f4b19634b6860412504367] <==
	raft2024/01/08 20:21:23 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/08 20:21:23 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/08 20:21:23 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/08 20:21:23 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 20:21:24.189073 W | auth: simple token is not cryptographically signed
	2024-01-08 20:21:24.822527 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 20:21:24.829022 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 20:21:24.829181 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 20:21:24.829381 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-08 20:21:24.829590 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 20:21:24 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 20:21:24.829987 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2024/01/08 20:21:25 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/08 20:21:25 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/08 20:21:25 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/08 20:21:25 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/08 20:21:25 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-08 20:21:25.489691 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 20:21:25.490456 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 20:21:25.490748 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 20:21:25.490880 I | etcdserver: published {Name:ingress-addon-legacy-918006 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-08 20:21:25.490965 I | embed: ready to serve client requests
	2024-01-08 20:21:25.492490 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-08 20:21:25.495520 I | embed: ready to serve client requests
	2024-01-08 20:21:25.496809 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 20:23:25 up  3:05,  0 users,  load average: 0.79, 1.16, 1.36
	Linux ingress-addon-legacy-918006 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [8b38694cae024ac41c8d948628a601febb3d8f0e9808e40742a16c3e9a0e3b7f] <==
	I0108 20:21:50.002923       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0108 20:21:50.003064       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:21:50.003074       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:21:50.003088       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:21:50.402539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:21:50.402703       1 main.go:227] handling current node
	I0108 20:22:00.506424       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:00.506452       1 main.go:227] handling current node
	I0108 20:22:10.516699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:10.516730       1 main.go:227] handling current node
	I0108 20:22:20.520463       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:20.520493       1 main.go:227] handling current node
	I0108 20:22:30.532460       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:30.532485       1 main.go:227] handling current node
	I0108 20:22:40.536146       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:40.536176       1 main.go:227] handling current node
	I0108 20:22:50.548492       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:50.548519       1 main.go:227] handling current node
	I0108 20:23:00.551887       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:00.551916       1 main.go:227] handling current node
	I0108 20:23:10.563944       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:10.563973       1 main.go:227] handling current node
	I0108 20:23:20.570972       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:20.571003       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e474c6a4ca0ffb8b7241e19a55f9bec7f94ceb2f009ae838499f3e18943030f6] <==
	I0108 20:21:29.417987       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I0108 20:21:29.418086       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 20:21:29.438134       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 20:21:29.438188       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 20:21:29.438235       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:21:29.438254       1 cache.go:39] Caches are synced for autoregister controller
	I0108 20:21:29.438559       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 20:21:30.309156       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 20:21:30.309189       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 20:21:30.317756       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 20:21:30.325062       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:21:30.325085       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 20:21:30.734363       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:21:30.784316       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 20:21:30.887810       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0108 20:21:30.888847       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 20:21:30.892746       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:21:31.180024       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:21:31.670508       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 20:21:32.640481       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 20:21:32.775748       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 20:21:47.111212       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 20:21:47.215946       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 20:22:16.116122       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 20:22:39.827728       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [e823ecdaa731560b90660f3c1a3a3587598d811cf8ce124f3099c524b73aec05] <==
	I0108 20:21:47.300700       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"72b36996-7b8f-44cc-b901-c3951659496b", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gwnp6
	I0108 20:21:47.358487       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"89596428-9de9-4f78-9f48-482b380a93ec", APIVersion:"apps/v1", ResourceVersion:"235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-nvzkh
	I0108 20:21:47.417130       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0108 20:21:47.517350       1 shared_informer.go:230] Caches are synced for stateful set 
	I0108 20:21:47.520000       1 shared_informer.go:230] Caches are synced for attach detach 
	I0108 20:21:47.523872       1 shared_informer.go:230] Caches are synced for expand 
	I0108 20:21:47.573016       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0108 20:21:47.586236       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:21:47.628562       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:21:47.668312       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0108 20:21:47.677292       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:21:47.677323       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 20:21:47.716200       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2c635d3d-a9c9-427d-b3a2-1501aea66029", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0108 20:21:47.747277       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dd8f232a-d369-4c4d-900f-64555c2af6e6", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-qt7pc
	I0108 20:21:47.830407       1 request.go:621] Throttling request took 1.049680365s, request: GET:https://control-plane.minikube.internal:8443/apis/policy/v1beta1?timeout=32s
	I0108 20:21:48.273964       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0108 20:21:48.274007       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:22:16.100272       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"25db63a1-499a-4e1b-9f1a-85fe33068aa1", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 20:22:16.130640       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"8ecbc5cf-9194-4427-843b-6fee6aae6514", APIVersion:"apps/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-cb746
	I0108 20:22:16.133593       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0875d539-0068-4078-b50a-8212931a18e0", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-gb47x
	I0108 20:22:16.218070       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"39d3738e-e8ec-4287-82b8-bfa0be652523", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-l4t4k
	I0108 20:22:18.356070       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0875d539-0068-4078-b50a-8212931a18e0", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:22:18.366841       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"39d3738e-e8ec-4287-82b8-bfa0be652523", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:22:48.702363       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ccc12e1c-440a-4343-b45c-5a973ebc705c", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 20:22:48.731934       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"3168d7b0-71cd-418d-bf38-856cdc4ca419", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-fw2jg
	
	
	==> kube-proxy [4d88081a9adc34568d2cbb517a29bd4b2bfa37a02b1db1168d6f419169c38170] <==
	W0108 20:21:48.510079       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 20:21:48.524757       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0108 20:21:48.524813       1 server_others.go:186] Using iptables Proxier.
	I0108 20:21:48.525249       1 server.go:583] Version: v1.18.20
	I0108 20:21:48.528576       1 config.go:133] Starting endpoints config controller
	I0108 20:21:48.528595       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 20:21:48.528665       1 config.go:315] Starting service config controller
	I0108 20:21:48.528669       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 20:21:48.641692       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0108 20:21:48.644161       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [1c799060fa297d915717c610df7c66f43e256f3f1d2beac32b397b66c35fcf4a] <==
	I0108 20:21:29.477653       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:21:29.478198       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 20:21:29.484046       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:21:29.485191       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:21:29.485257       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:21:29.485361       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:21:29.485413       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:21:29.485585       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:21:29.485636       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:21:29.485682       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:21:29.485727       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:21:29.485800       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:21:29.485855       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:21:29.486022       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:21:30.324939       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:21:30.326617       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:21:30.363899       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:21:30.406555       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:21:30.517624       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:21:30.526754       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:21:30.535082       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:21:30.535209       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:21:30.541272       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0108 20:21:32.677928       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0108 20:21:47.298298       1 factory.go:503] pod: kube-system/coredns-66bff467f8-qt7pc is already present in the active queue
	
	
	==> kubelet <==
	Jan 08 20:22:53 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:22:53.466369    1623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e5a5f1bd04a736cfd8469eb7675e1b6071a569ec6ad9cd1ef614aff0a6c95dd
	Jan 08 20:22:53 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:22:53.466610    1623 pod_workers.go:191] Error syncing pod 5f0722fe-1630-483f-a8f0-bf3d54e3b123 ("hello-world-app-5f5d8b66bb-fw2jg_default(5f0722fe-1630-483f-a8f0-bf3d54e3b123)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fw2jg_default(5f0722fe-1630-483f-a8f0-bf3d54e3b123)"
	Jan 08 20:23:04 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:04.672016    1623 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-vndb2" (UniqueName: "kubernetes.io/secret/147971ef-9bff-497c-9742-eb06e653f83f-minikube-ingress-dns-token-vndb2") pod "147971ef-9bff-497c-9742-eb06e653f83f" (UID: "147971ef-9bff-497c-9742-eb06e653f83f")
	Jan 08 20:23:04 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:04.676819    1623 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147971ef-9bff-497c-9742-eb06e653f83f-minikube-ingress-dns-token-vndb2" (OuterVolumeSpecName: "minikube-ingress-dns-token-vndb2") pod "147971ef-9bff-497c-9742-eb06e653f83f" (UID: "147971ef-9bff-497c-9742-eb06e653f83f"). InnerVolumeSpecName "minikube-ingress-dns-token-vndb2". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:23:04 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:04.772844    1623 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-vndb2" (UniqueName: "kubernetes.io/secret/147971ef-9bff-497c-9742-eb06e653f83f-minikube-ingress-dns-token-vndb2") on node "ingress-addon-legacy-918006" DevicePath ""
	Jan 08 20:23:05 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:05.213208    1623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e5a5f1bd04a736cfd8469eb7675e1b6071a569ec6ad9cd1ef614aff0a6c95dd
	Jan 08 20:23:05 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:05.490061    1623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0e5a5f1bd04a736cfd8469eb7675e1b6071a569ec6ad9cd1ef614aff0a6c95dd
	Jan 08 20:23:05 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:05.490445    1623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 38e52378f295d416a32b5c1c3a1d7e8f1295a559ba863d59f62214a08e7ffb3c
	Jan 08 20:23:05 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:05.490714    1623 pod_workers.go:191] Error syncing pod 5f0722fe-1630-483f-a8f0-bf3d54e3b123 ("hello-world-app-5f5d8b66bb-fw2jg_default(5f0722fe-1630-483f-a8f0-bf3d54e3b123)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fw2jg_default(5f0722fe-1630-483f-a8f0-bf3d54e3b123)"
	Jan 08 20:23:06 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:06.495196    1623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0d127686af6dddf9edd01e4b7c43e5fd7c6e9018780ffe7f7ed18b681e9d368c
	Jan 08 20:23:17 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:17.159656    1623 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cb746.17a878e8159d3c5b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cb746", UID:"a86cfe96-8ba4-4b70-8036-0021878af2ae", APIVersion:"v1", ResourceVersion:"493", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-918006"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f344d48f26a5b, ext:104567163952, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f344d48f26a5b, ext:104567163952, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cb746.17a878e8159d3c5b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:23:17 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:17.201274    1623 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cb746.17a878e8159d3c5b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cb746", UID:"a86cfe96-8ba4-4b70-8036-0021878af2ae", APIVersion:"v1", ResourceVersion:"493", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-918006"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f344d48f26a5b, ext:104567163952, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f344d4b30910b, ext:104604791512, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cb746.17a878e8159d3c5b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:23:17 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:17.217926    1623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 38e52378f295d416a32b5c1c3a1d7e8f1295a559ba863d59f62214a08e7ffb3c
	Jan 08 20:23:17 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:17.226140    1623 pod_workers.go:191] Error syncing pod 5f0722fe-1630-483f-a8f0-bf3d54e3b123 ("hello-world-app-5f5d8b66bb-fw2jg_default(5f0722fe-1630-483f-a8f0-bf3d54e3b123)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fw2jg_default(5f0722fe-1630-483f-a8f0-bf3d54e3b123)"
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:19.311893    1623 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-bpj44" (UniqueName: "kubernetes.io/secret/a86cfe96-8ba4-4b70-8036-0021878af2ae-ingress-nginx-token-bpj44") pod "a86cfe96-8ba4-4b70-8036-0021878af2ae" (UID: "a86cfe96-8ba4-4b70-8036-0021878af2ae")
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:19.311943    1623 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a86cfe96-8ba4-4b70-8036-0021878af2ae-webhook-cert") pod "a86cfe96-8ba4-4b70-8036-0021878af2ae" (UID: "a86cfe96-8ba4-4b70-8036-0021878af2ae")
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:19.320384    1623 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86cfe96-8ba4-4b70-8036-0021878af2ae-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a86cfe96-8ba4-4b70-8036-0021878af2ae" (UID: "a86cfe96-8ba4-4b70-8036-0021878af2ae"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:19.321225    1623 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86cfe96-8ba4-4b70-8036-0021878af2ae-ingress-nginx-token-bpj44" (OuterVolumeSpecName: "ingress-nginx-token-bpj44") pod "a86cfe96-8ba4-4b70-8036-0021878af2ae" (UID: "a86cfe96-8ba4-4b70-8036-0021878af2ae"). InnerVolumeSpecName "ingress-nginx-token-bpj44". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:19.360482    1623 remote_runtime.go:128] StopPodSandbox "b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526": plugin type="portmap" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-5951103a5c1308dd94e5e --wait]: exit status 1: iptables: No chain/target/match by that name.
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:19.360561    1623 kuberuntime_manager.go:912] Failed to stop sandbox {"containerd" "b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526"}
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: E0108 20:23:19.360603    1623 kubelet_pods.go:1235] Failed killing the pod "ingress-nginx-controller-7fcf777cb7-cb746": failed to "KillPodSandbox" for "a86cfe96-8ba4-4b70-8036-0021878af2ae" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-5951103a5c1308dd94e5e --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:19.412706    1623 reconciler.go:319] Volume detached for volume "ingress-nginx-token-bpj44" (UniqueName: "kubernetes.io/secret/a86cfe96-8ba4-4b70-8036-0021878af2ae-ingress-nginx-token-bpj44") on node "ingress-addon-legacy-918006" DevicePath ""
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: I0108 20:23:19.412767    1623 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a86cfe96-8ba4-4b70-8036-0021878af2ae-webhook-cert") on node "ingress-addon-legacy-918006" DevicePath ""
	Jan 08 20:23:19 ingress-addon-legacy-918006 kubelet[1623]: W0108 20:23:19.522635    1623 pod_container_deletor.go:77] Container "b2bf4dfdd0ccdedc7c0910d99d90f9539d4caf6023393353db94e698d5961526" not found in pod's containers
	Jan 08 20:23:20 ingress-addon-legacy-918006 kubelet[1623]: W0108 20:23:20.219665    1623 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/a86cfe96-8ba4-4b70-8036-0021878af2ae/volumes" does not exist
	
	
	==> storage-provisioner [91a620e8ecd1e74b3820ec9e45ff99ff2897272539f0bea36a6be74cdebcd9ba] <==
	I0108 20:21:51.189397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:21:51.207846       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:21:51.208099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:21:51.217979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:21:51.218350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-918006_b086ea04-0c37-4579-8a41-92414b8f13cf!
	I0108 20:21:51.218162       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c63d20b-1719-4edc-9ab0-703b3c478a65", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-918006_b086ea04-0c37-4579-8a41-92414b8f13cf became leader
	I0108 20:21:51.318543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-918006_b086ea04-0c37-4579-8a41-92414b8f13cf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-918006 -n ingress-addon-legacy-918006
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-918006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (59.72s)

                                                
                                    

Test pass (267/316)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 26.99
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 16.11
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.1
17 TestDownloadOnly/v1.29.0-rc.2/json-events 17.99
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.1
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
26 TestBinaryMirror 0.65
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 133.21
34 TestAddons/parallel/Registry 15.8
36 TestAddons/parallel/InspektorGadget 12.29
37 TestAddons/parallel/MetricsServer 5.91
40 TestAddons/parallel/CSI 75.21
41 TestAddons/parallel/Headlamp 12.09
42 TestAddons/parallel/CloudSpanner 5.69
43 TestAddons/parallel/LocalPath 11.81
44 TestAddons/parallel/NvidiaDevicePlugin 5.67
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.18
49 TestAddons/StoppedEnableDisable 12.46
50 TestCertOptions 37.55
51 TestCertExpiration 234.34
53 TestForceSystemdFlag 41.83
54 TestForceSystemdEnv 43.32
55 TestDockerEnvContainerd 48.44
60 TestErrorSpam/setup 30.41
61 TestErrorSpam/start 0.91
62 TestErrorSpam/status 1.14
63 TestErrorSpam/pause 1.94
64 TestErrorSpam/unpause 2.03
65 TestErrorSpam/stop 1.52
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 57.32
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 6.26
72 TestFunctional/serial/KubeContext 0.07
73 TestFunctional/serial/KubectlGetPods 0.1
76 TestFunctional/serial/CacheCmd/cache/add_remote 4.15
77 TestFunctional/serial/CacheCmd/cache/add_local 1.5
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.25
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.18
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
87 TestFunctional/serial/LogsCmd 1.59
91 TestFunctional/parallel/ConfigCmd 0.64
92 TestFunctional/parallel/DashboardCmd 11.64
93 TestFunctional/parallel/DryRun 0.56
94 TestFunctional/parallel/InternationalLanguage 0.3
95 TestFunctional/parallel/StatusCmd 1.58
99 TestFunctional/parallel/ServiceCmdConnect 10.78
100 TestFunctional/parallel/AddonsCmd 0.2
101 TestFunctional/parallel/PersistentVolumeClaim 77.9
103 TestFunctional/parallel/SSHCmd 0.83
104 TestFunctional/parallel/CpCmd 2.3
106 TestFunctional/parallel/FileSync 0.38
107 TestFunctional/parallel/CertSync 2.43
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
115 TestFunctional/parallel/License 0.35
116 TestFunctional/parallel/Version/short 0.08
117 TestFunctional/parallel/Version/components 1.37
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.61
123 TestFunctional/parallel/ImageCommands/Setup 2.45
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.66
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 66.35
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.68
152 TestFunctional/parallel/ProfileCmd/profile_list 0.63
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.74
154 TestFunctional/parallel/MountCmd/any-port 8.15
155 TestFunctional/parallel/MountCmd/specific-port 2.71
156 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
157 TestFunctional/delete_addon-resizer_images 0.09
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 111.65
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.53
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.73
170 TestJSONOutput/start/Command 75.38
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.97
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.76
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.86
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.27
195 TestKicCustomNetwork/create_custom_network 43.32
196 TestKicCustomNetwork/use_default_bridge_network 34.75
197 TestKicExistingNetwork 33.21
198 TestKicCustomSubnet 37.17
199 TestKicStaticIP 35.84
200 TestMainNoArgs 0.08
201 TestMinikubeProfile 71.23
204 TestMountStart/serial/StartWithMountFirst 6.21
205 TestMountStart/serial/VerifyMountFirst 0.3
206 TestMountStart/serial/StartWithMountSecond 6.78
207 TestMountStart/serial/VerifyMountSecond 0.29
208 TestMountStart/serial/DeleteFirst 1.67
209 TestMountStart/serial/VerifyMountPostDelete 0.3
210 TestMountStart/serial/Stop 1.23
211 TestMountStart/serial/RestartStopped 7.47
212 TestMountStart/serial/VerifyMountPostStop 0.31
215 TestMultiNode/serial/FreshStart2Nodes 71.49
216 TestMultiNode/serial/DeployApp2Nodes 4.73
217 TestMultiNode/serial/PingHostFrom2Pods 1.14
218 TestMultiNode/serial/AddNode 18.07
219 TestMultiNode/serial/MultiNodeLabels 0.1
220 TestMultiNode/serial/ProfileList 0.36
221 TestMultiNode/serial/CopyFile 11.56
222 TestMultiNode/serial/StopNode 2.45
223 TestMultiNode/serial/StartAfterStop 12.59
224 TestMultiNode/serial/RestartKeepsNodes 119.98
225 TestMultiNode/serial/DeleteNode 5.32
226 TestMultiNode/serial/StopMultiNode 24.36
227 TestMultiNode/serial/RestartMultiNode 80.74
228 TestMultiNode/serial/ValidateNameConflict 34.68
233 TestPreload 178.61
235 TestScheduledStopUnix 110.64
238 TestInsufficientStorage 13.9
239 TestRunningBinaryUpgrade 87.93
241 TestKubernetesUpgrade 375.06
242 TestMissingContainerUpgrade 166.2
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
245 TestNoKubernetes/serial/StartWithK8s 40.28
246 TestNoKubernetes/serial/StartWithStopK8s 20.75
247 TestNoKubernetes/serial/Start 8.93
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
249 TestNoKubernetes/serial/ProfileList 1.14
250 TestNoKubernetes/serial/Stop 1.31
251 TestNoKubernetes/serial/StartNoArgs 7.73
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
253 TestStoppedBinaryUpgrade/Setup 1.73
254 TestStoppedBinaryUpgrade/Upgrade 109.64
255 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
264 TestPause/serial/Start 83.07
265 TestPause/serial/SecondStartNoReconfiguration 7.79
266 TestPause/serial/Pause 1.09
267 TestPause/serial/VerifyStatus 0.43
268 TestPause/serial/Unpause 1.09
269 TestPause/serial/PauseAgain 1.09
270 TestPause/serial/DeletePaused 3.5
271 TestPause/serial/VerifyDeletedResources 0.33
279 TestNetworkPlugins/group/false 5.34
284 TestStartStop/group/old-k8s-version/serial/FirstStart 128.71
285 TestStartStop/group/old-k8s-version/serial/DeployApp 9.53
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
287 TestStartStop/group/old-k8s-version/serial/Stop 12.16
288 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
289 TestStartStop/group/old-k8s-version/serial/SecondStart 665.04
291 TestStartStop/group/no-preload/serial/FirstStart 73.72
292 TestStartStop/group/no-preload/serial/DeployApp 9.38
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
294 TestStartStop/group/no-preload/serial/Stop 12.14
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
296 TestStartStop/group/no-preload/serial/SecondStart 344.08
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.07
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
299 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
300 TestStartStop/group/no-preload/serial/Pause 3.63
302 TestStartStop/group/embed-certs/serial/FirstStart 61.18
303 TestStartStop/group/embed-certs/serial/DeployApp 8.37
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
305 TestStartStop/group/embed-certs/serial/Stop 12.23
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/embed-certs/serial/SecondStart 342.47
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
311 TestStartStop/group/old-k8s-version/serial/Pause 3.54
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.26
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.3
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.22
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.34
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
322 TestStartStop/group/embed-certs/serial/Pause 3.56
324 TestStartStop/group/newest-cni/serial/FirstStart 48.36
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
327 TestStartStop/group/newest-cni/serial/Stop 1.28
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
329 TestStartStop/group/newest-cni/serial/SecondStart 31.63
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
333 TestStartStop/group/newest-cni/serial/Pause 3.42
334 TestNetworkPlugins/group/auto/Start 60.76
335 TestNetworkPlugins/group/auto/KubeletFlags 0.35
336 TestNetworkPlugins/group/auto/NetCatPod 8.3
337 TestNetworkPlugins/group/auto/DNS 0.19
338 TestNetworkPlugins/group/auto/Localhost 0.18
339 TestNetworkPlugins/group/auto/HairPin 0.18
340 TestNetworkPlugins/group/kindnet/Start 88.54
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.01
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.18
345 TestNetworkPlugins/group/calico/Start 78.84
346 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.56
348 TestNetworkPlugins/group/kindnet/NetCatPod 9.4
349 TestNetworkPlugins/group/kindnet/DNS 0.34
350 TestNetworkPlugins/group/kindnet/Localhost 0.29
351 TestNetworkPlugins/group/kindnet/HairPin 0.31
352 TestNetworkPlugins/group/calico/ControllerPod 6.02
353 TestNetworkPlugins/group/custom-flannel/Start 67.65
354 TestNetworkPlugins/group/calico/KubeletFlags 0.41
355 TestNetworkPlugins/group/calico/NetCatPod 11.43
356 TestNetworkPlugins/group/calico/DNS 0.29
357 TestNetworkPlugins/group/calico/Localhost 0.21
358 TestNetworkPlugins/group/calico/HairPin 0.21
359 TestNetworkPlugins/group/enable-default-cni/Start 87.86
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.48
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
362 TestNetworkPlugins/group/custom-flannel/DNS 0.25
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
365 TestNetworkPlugins/group/flannel/Start 57.96
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
373 TestNetworkPlugins/group/bridge/Start 87.62
374 TestNetworkPlugins/group/flannel/NetCatPod 9.4
375 TestNetworkPlugins/group/flannel/DNS 0.27
376 TestNetworkPlugins/group/flannel/Localhost 0.21
377 TestNetworkPlugins/group/flannel/HairPin 0.2
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
379 TestNetworkPlugins/group/bridge/NetCatPod 11.26
380 TestNetworkPlugins/group/bridge/DNS 0.19
381 TestNetworkPlugins/group/bridge/Localhost 0.17
382 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (26.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-896079 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-896079 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (26.988707061s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (26.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-896079
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-896079: exit status 85 (96.385824ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-896079 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-896079        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:29.840151  654811 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:29.840304  654811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:29.840315  654811 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:29.840322  654811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:29.840585  654811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	W0108 20:09:29.840706  654811 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-649468/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-649468/.minikube/config/config.json: no such file or directory
	I0108 20:09:29.841189  654811 out.go:303] Setting JSON to true
	I0108 20:09:29.842005  654811 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10310,"bootTime":1704734260,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:09:29.842079  654811 start.go:138] virtualization:  
	I0108 20:09:29.845197  654811 out.go:97] [download-only-896079] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:09:29.847847  654811 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:29.845520  654811 notify.go:220] Checking for updates...
	W0108 20:09:29.845459  654811 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 20:09:29.849794  654811 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:29.852199  654811 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:09:29.854276  654811 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:09:29.856052  654811 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 20:09:29.861224  654811 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:29.861495  654811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:09:29.886000  654811 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:29.886114  654811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:29.964852  654811 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-08 20:09:29.955044175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:29.964956  654811 docker.go:295] overlay module found
	I0108 20:09:29.967115  654811 out.go:97] Using the docker driver based on user configuration
	I0108 20:09:29.967141  654811 start.go:298] selected driver: docker
	I0108 20:09:29.967147  654811 start.go:902] validating driver "docker" against <nil>
	I0108 20:09:29.967269  654811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:30.041778  654811 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-08 20:09:30.029530219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:30.041973  654811 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:09:30.042274  654811 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0108 20:09:30.042530  654811 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:09:30.044930  654811 out.go:169] Using Docker driver with root privileges
	I0108 20:09:30.047624  654811 cni.go:84] Creating CNI manager for ""
	I0108 20:09:30.047672  654811 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:09:30.047694  654811 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:09:30.047710  654811 start_flags.go:323] config:
	{Name:download-only-896079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-896079 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:30.050229  654811 out.go:97] Starting control plane node download-only-896079 in cluster download-only-896079
	I0108 20:09:30.050298  654811 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:09:30.052481  654811 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:30.052547  654811 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 20:09:30.052636  654811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:30.072808  654811 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:30.073555  654811 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:30.073681  654811 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:30.118890  654811 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0108 20:09:30.118930  654811 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:30.119101  654811 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 20:09:30.122228  654811 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:09:30.122272  654811 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:09:30.237868  654811 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0108 20:09:36.553302  654811 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-896079"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
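For context on the exit status 85 above: a --download-only profile never creates a control-plane node, so "minikube logs" has nothing to collect and the test treats the non-zero exit as expected. A minimal reproduction sketch in Go (binary path and profile name are taken from the log above; treating 85 as the expected code is an assumption based on this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same call the test makes; for a download-only profile no control-plane
	// node exists, so "minikube logs" is expected to fail.
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-896079")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // this run produced 85
	}
}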

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (16.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-896079 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-896079 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.113648972s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (16.11s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-896079
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-896079: exit status 85 (95.414079ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-896079 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-896079        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-896079 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-896079        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:56.929298  654887 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:56.929461  654887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:56.929471  654887 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:56.929478  654887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:56.929861  654887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	W0108 20:09:56.930051  654887 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-649468/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-649468/.minikube/config/config.json: no such file or directory
	I0108 20:09:56.930347  654887 out.go:303] Setting JSON to true
	I0108 20:09:56.931312  654887 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10337,"bootTime":1704734260,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:09:56.931414  654887 start.go:138] virtualization:  
	I0108 20:09:56.933822  654887 out.go:97] [download-only-896079] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:09:56.936059  654887 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:56.934327  654887 notify.go:220] Checking for updates...
	I0108 20:09:56.940459  654887 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:56.942596  654887 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:09:56.944507  654887 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:09:56.946443  654887 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 20:09:56.950552  654887 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:56.951071  654887 config.go:182] Loaded profile config "download-only-896079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0108 20:09:56.951160  654887 start.go:810] api.Load failed for download-only-896079: filestore "download-only-896079": Docker machine "download-only-896079" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:56.951259  654887 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:09:56.951287  654887 start.go:810] api.Load failed for download-only-896079: filestore "download-only-896079": Docker machine "download-only-896079" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:56.977024  654887 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:56.977177  654887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:57.069596  654887 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:09:57.05939454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:57.069709  654887 docker.go:295] overlay module found
	I0108 20:09:57.072018  654887 out.go:97] Using the docker driver based on existing profile
	I0108 20:09:57.072062  654887 start.go:298] selected driver: docker
	I0108 20:09:57.072074  654887 start.go:902] validating driver "docker" against &{Name:download-only-896079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-896079 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:57.072283  654887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:57.151253  654887 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:09:57.141935182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:57.151729  654887 cni.go:84] Creating CNI manager for ""
	I0108 20:09:57.151753  654887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:09:57.151767  654887 start_flags.go:323] config:
	{Name:download-only-896079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-896079 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInter
val:1m0s GPUs:}
	I0108 20:09:57.154139  654887 out.go:97] Starting control plane node download-only-896079 in cluster download-only-896079
	I0108 20:09:57.154163  654887 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:09:57.156068  654887 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:57.156092  654887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:09:57.156143  654887 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:57.179076  654887 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:57.179232  654887 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:57.179253  654887 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:09:57.179258  654887 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:09:57.179267  654887 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:57.350116  654887 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0108 20:09:57.350157  654887 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:57.350926  654887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 20:09:57.353178  654887 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 20:09:57.353205  654887 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:09:57.498822  654887 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0108 20:10:11.302103  654887 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:10:11.302227  654887 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-896079"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.10s)
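The preload tarball above is downloaded with an md5 checksum query parameter and then verified on disk (the preload.go "saving checksum" / "verifying checksum" lines). A minimal standalone sketch of that verification in Go, with the cache path and expected digest copied from the log above (an illustration, not minikube's own implementation):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Path and expected digest come from the download.go/preload.go lines above.
	const path = "/home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4"
	const want = "cc2d75db20c4d651f0460755d6df7b03"

	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println("read:", err)
		return
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum match:", got == want)
}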

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (17.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-896079 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-896079 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.989428462s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (17.99s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-896079
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-896079: exit status 85 (97.581125ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-896079 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-896079           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-896079 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-896079           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-896079 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-896079           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:13.136782  654957 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:10:13.136910  654957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:13.136920  654957 out.go:309] Setting ErrFile to fd 2...
	I0108 20:10:13.136926  654957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:13.137214  654957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	W0108 20:10:13.137374  654957 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-649468/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-649468/.minikube/config/config.json: no such file or directory
	I0108 20:10:13.137632  654957 out.go:303] Setting JSON to true
	I0108 20:10:13.138455  654957 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10354,"bootTime":1704734260,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:10:13.138530  654957 start.go:138] virtualization:  
	I0108 20:10:13.141536  654957 out.go:97] [download-only-896079] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:10:13.143794  654957 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:10:13.141852  654957 notify.go:220] Checking for updates...
	I0108 20:10:13.147849  654957 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:10:13.149847  654957 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:10:13.152239  654957 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:10:13.154263  654957 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 20:10:13.158134  654957 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:10:13.158812  654957 config.go:182] Loaded profile config "download-only-896079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	W0108 20:10:13.158893  654957 start.go:810] api.Load failed for download-only-896079: filestore "download-only-896079": Docker machine "download-only-896079" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:13.159018  654957 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:10:13.159050  654957 start.go:810] api.Load failed for download-only-896079: filestore "download-only-896079": Docker machine "download-only-896079" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:13.183668  654957 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:10:13.183794  654957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:13.267141  654957 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:13.255649685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:13.267259  654957 docker.go:295] overlay module found
	I0108 20:10:13.269349  654957 out.go:97] Using the docker driver based on existing profile
	I0108 20:10:13.269401  654957 start.go:298] selected driver: docker
	I0108 20:10:13.269412  654957 start.go:902] validating driver "docker" against &{Name:download-only-896079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-896079 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:13.269586  654957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:13.337647  654957 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:13.328151318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:13.338124  654957 cni.go:84] Creating CNI manager for ""
	I0108 20:10:13.338149  654957 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0108 20:10:13.338161  654957 start_flags.go:323] config:
	{Name:download-only-896079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-896079 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPause
Interval:1m0s GPUs:}
	I0108 20:10:13.340803  654957 out.go:97] Starting control plane node download-only-896079 in cluster download-only-896079
	I0108 20:10:13.340834  654957 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0108 20:10:13.343016  654957 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:10:13.343048  654957 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0108 20:10:13.343156  654957 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:10:13.359933  654957 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:10:13.360132  654957 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:10:13.360151  654957 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:10:13.360156  654957 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:10:13.360164  654957 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:10:13.422020  654957 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0108 20:10:13.422049  654957 cache.go:56] Caching tarball of preloaded images
	I0108 20:10:13.422252  654957 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0108 20:10:13.424464  654957 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 20:10:13.424491  654957 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0108 20:10:13.524555  654957 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:0568f1be2490e32c4c0c2b492b9a5803 -> /home/jenkins/minikube-integration/17907-649468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-896079"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-896079
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-553324 --alsologtostderr --binary-mirror http://127.0.0.1:42199 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-553324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-553324
--- PASS: TestBinaryMirror (0.65s)
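TestBinaryMirror points "minikube start" at a local HTTP endpoint via --binary-mirror. A minimal sketch of a stand-in mirror in Go (the 127.0.0.1:42199 address comes from the command above; serving a local "./mirror" directory of pre-fetched binaries is an assumption for illustration):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of pre-downloaded kubectl/kubelet/kubeadm binaries so
	// "minikube start --binary-mirror http://127.0.0.1:42199" can fetch from it.
	handler := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:42199", handler))
}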

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-241374
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-241374: exit status 85 (107.113755ms)

                                                
                                                
-- stdout --
	* Profile "addons-241374" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-241374"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-241374
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-241374: exit status 85 (101.724259ms)

                                                
                                                
-- stdout --
	* Profile "addons-241374" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-241374"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
TestAddons/Setup (133.21s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-241374 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-241374 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m13.206658958s)
--- PASS: TestAddons/Setup (133.21s)

                                                
                                    
TestAddons/parallel/Registry (15.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 58.768415ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bs6q6" [ddd85010-11ab-4e86-bf0c-f8de74575f5c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004363469s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kdvlt" [e713e557-9ca9-4695-8167-78ff9123c199] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004482611s
addons_test.go:340: (dbg) Run:  kubectl --context addons-241374 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-241374 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-241374 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.488886358s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 ip
2024/01/08 20:13:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.80s)
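
For reference, the registry addon check above can be retraced by hand. The sketch below assumes the same profile name and mirrors the two probes the test performs: an in-cluster wget against the registry service, and an HTTP request to port 5000 on the node (curl here stands in for the test's internal GET).

    # probe the registry service from inside the cluster
    kubectl --context addons-241374 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # registry-proxy also exposes the registry on the node address
    NODE_IP=$(out/minikube-linux-arm64 -p addons-241374 ip)
    curl -sI "http://${NODE_IP}:5000"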

                                                
                                    
TestAddons/parallel/InspektorGadget (12.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h9wlx" [a5a13642-5932-4c6d-b47d-4eca68264c73] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004290792s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-241374
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-241374: (6.282914721s)
--- PASS: TestAddons/parallel/InspektorGadget (12.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.91s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.55206ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-9l75h" [f5f8810c-2a29-4938-bdcb-0822875f4a38] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005283109s
addons_test.go:415: (dbg) Run:  kubectl --context addons-241374 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.91s)
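
The metrics-server check comes down to waiting for the k8s-app=metrics-server pod and then confirming that kubectl top returns data. A minimal manual equivalent, assuming the same profile and context name:

    kubectl --context addons-241374 -n kube-system wait --for=condition=ready \
      pod -l k8s-app=metrics-server --timeout=6m
    kubectl --context addons-241374 top pods -n kube-system
    out/minikube-linux-arm64 -p addons-241374 addons disable metrics-server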

                                                
                                    
TestAddons/parallel/CSI (75.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 59.087698ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4d617377-0276-4e23-89d4-62004dd2a363] Pending
helpers_test.go:344: "task-pv-pod" [4d617377-0276-4e23-89d4-62004dd2a363] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4d617377-0276-4e23-89d4-62004dd2a363] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00719675s
addons_test.go:584: (dbg) Run:  kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-241374 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-241374 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-241374 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-241374 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d3f57c8f-64ae-42e9-8b50-f19f6e5cd878] Pending
helpers_test.go:344: "task-pv-pod-restore" [d3f57c8f-64ae-42e9-8b50-f19f6e5cd878] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d3f57c8f-64ae-42e9-8b50-f19f6e5cd878] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003888952s
addons_test.go:626: (dbg) Run:  kubectl --context addons-241374 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-241374 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-241374 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-241374 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.863160604s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-arm64 -p addons-241374 addons disable volumesnapshots --alsologtostderr -v=1: (1.049957713s)
--- PASS: TestAddons/parallel/CSI (75.21s)
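
The CSI test above walks a full provision, snapshot and restore cycle against the csi-hostpath driver. The outline below condenses those steps; the manifests are the testdata files named in the log, and the jsonpath reads are the same polls the helper repeats until the claim and snapshot are ready.

    kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-241374 get pvc hpvc -o jsonpath={.status.phase}          # poll until the claim is ready
    kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-241374 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-241374 delete pod task-pv-pod
    kubectl --context addons-241374 delete pvc hpvc
    kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-241374 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml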

                                                
                                    
TestAddons/parallel/Headlamp (12.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-241374 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-241374 --alsologtostderr -v=1: (2.088324084s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-psmrn" [a23f06c6-785f-4dff-aa68-af4e623a6ce0] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-psmrn" [a23f06c6-785f-4dff-aa68-af4e623a6ce0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-psmrn" [a23f06c6-785f-4dff-aa68-af4e623a6ce0] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003658677s
--- PASS: TestAddons/parallel/Headlamp (12.09s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-m97p9" [6885f804-7129-4e17-8652-c564924e4259] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004359314s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-241374
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
TestAddons/parallel/LocalPath (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-241374 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-241374 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [87262ec2-2ea2-40a3-b68b-d081431dc402] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [87262ec2-2ea2-40a3-b68b-d081431dc402] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [87262ec2-2ea2-40a3-b68b-d081431dc402] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004927013s
addons_test.go:891: (dbg) Run:  kubectl --context addons-241374 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 ssh "cat /opt/local-path-provisioner/pvc-88f0a05f-751c-46df-9b15-285db5e558d1_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-241374 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-241374 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-241374 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.81s)
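
The storage-provisioner-rancher (local-path) flow can be replayed the same way. The pvc-<uuid> directory under /opt/local-path-provisioner is generated per claim, so the glob below is an illustrative stand-in for the exact path that appears in the log.

    kubectl --context addons-241374 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-241374 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-241374 get pvc test-pvc -o jsonpath={.status.phase}
    # once the busybox pod has completed, read the written file back from the node
    out/minikube-linux-arm64 -p addons-241374 ssh \
      "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"
    kubectl --context addons-241374 delete pod test-local-path
    kubectl --context addons-241374 delete pvc test-pvc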

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w6v8f" [69e2a053-8552-4fbc-a1c1-810e10c9fc21] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005453145s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-241374
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-hxqzk" [5eefe72e-273d-45b4-adc5-eb506c2834f7] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004357726s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-241374 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-241374 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-241374
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-241374: (12.141972567s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-241374
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-241374
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-241374
--- PASS: TestAddons/StoppedEnableDisable (12.46s)
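
This test exercises toggling addons while the cluster is stopped; the enable/disable calls appear to be recorded against the profile so they can take effect on the next start, and disabling gvisor (never enabled here) also returns cleanly. In outline:

    out/minikube-linux-arm64 stop -p addons-241374
    out/minikube-linux-arm64 addons enable dashboard -p addons-241374
    out/minikube-linux-arm64 addons disable dashboard -p addons-241374
    out/minikube-linux-arm64 addons disable gvisor -p addons-241374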

                                                
                                    
TestCertOptions (37.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-261123 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-261123 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.713027197s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-261123 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-261123 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-261123 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-261123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-261123
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-261123: (2.085924443s)
--- PASS: TestCertOptions (37.55s)
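
TestCertOptions passes when the extra --apiserver-ips/--apiserver-names values show up as SANs in the generated apiserver certificate and the kubeconfig points at port 8555. A quick manual check along the same lines; the grep filters are readability additions, not part of the test:

    out/minikube-linux-arm64 -p cert-options-261123 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A2 "Subject Alternative Name"
    kubectl --context cert-options-261123 config view --minify | grep server: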

                                                
                                    
TestCertExpiration (234.34s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-839801 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-839801 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.011698072s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-839801 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-839801 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.696853194s)
helpers_test.go:175: Cleaning up "cert-expiration-839801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-839801
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-839801: (4.629579014s)
--- PASS: TestCertExpiration (234.34s)
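
Most of the wall-clock time here is the test waiting for the 3-minute certificates to lapse before the second start, which regenerates them with the new lifetime. Roughly:

    out/minikube-linux-arm64 start -p cert-expiration-839801 --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180    # stand-in for the test's wait while the short-lived certificates expire
    out/minikube-linux-arm64 start -p cert-expiration-839801 --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p cert-expiration-839801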

                                                
                                    
TestForceSystemdFlag (41.83s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-904652 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-904652 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.173834415s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-904652 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-904652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-904652
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-904652: (2.220089484s)
--- PASS: TestForceSystemdFlag (41.83s)
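
With --force-systemd the node is expected to be configured for the systemd cgroup driver, which the test inspects via /etc/containerd/config.toml. The grep below is an illustrative shortcut and assumes the standard containerd runc option name SystemdCgroup:

    out/minikube-linux-arm64 start -p force-systemd-flag-904652 --memory=2048 \
      --force-systemd --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-flag-904652 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"
    out/minikube-linux-arm64 delete -p force-systemd-flag-904652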

                                                
                                    
TestForceSystemdEnv (43.32s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-037027 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-037027 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.284832041s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-037027 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-037027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-037027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-037027: (2.603335162s)
--- PASS: TestForceSystemdEnv (43.32s)

                                                
                                    
TestDockerEnvContainerd (48.44s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-135997 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-135997 --driver=docker  --container-runtime=containerd: (31.936399323s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-135997"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-135997": (1.402940292s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-V9txidqYLxlW/agent.671978" SSH_AGENT_PID="671979" DOCKER_HOST=ssh://docker@127.0.0.1:33418 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-V9txidqYLxlW/agent.671978" SSH_AGENT_PID="671979" DOCKER_HOST=ssh://docker@127.0.0.1:33418 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-V9txidqYLxlW/agent.671978" SSH_AGENT_PID="671979" DOCKER_HOST=ssh://docker@127.0.0.1:33418 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.635495833s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-V9txidqYLxlW/agent.671978" SSH_AGENT_PID="671979" DOCKER_HOST=ssh://docker@127.0.0.1:33418 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-135997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-135997
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-135997: (2.049900158s)
--- PASS: TestDockerEnvContainerd (48.44s)
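
The docker-env flow above is easier to read when the exported variables are eval'd rather than pasted: the SSH agent socket, agent PID and ssh:// DOCKER_HOST all come from the output of docker-env --ssh-host --ssh-add. A condensed bash equivalent:

    out/minikube-linux-arm64 start -p dockerenv-135997 --driver=docker --container-runtime=containerd
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-135997)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls
    out/minikube-linux-arm64 delete -p dockerenv-135997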

                                                
                                    
TestErrorSpam/setup (30.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-917904 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-917904 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-917904 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-917904 --driver=docker  --container-runtime=containerd: (30.406503194s)
--- PASS: TestErrorSpam/setup (30.41s)

                                                
                                    
TestErrorSpam/start (0.91s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

                                                
                                    
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
TestErrorSpam/unpause (2.03s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 unpause
--- PASS: TestErrorSpam/unpause (2.03s)

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 stop: (1.276580295s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-917904 --log_dir /tmp/nospam-917904 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17907-649468/.minikube/files/etc/test/nested/copy/654805/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-819954 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-819954 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (57.320318514s)
--- PASS: TestFunctional/serial/StartWithProxy (57.32s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-819954 --alsologtostderr -v=8
E0108 20:17:46.393178  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:46.400034  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:46.410251  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:46.430724  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:46.471579  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:46.552497  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:46.713076  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:47.034127  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:47.675669  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:17:48.955915  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-819954 --alsologtostderr -v=8: (6.259525754s)
functional_test.go:659: soft start took 6.260782598s for "functional-819954" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.26s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-819954 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:3.1: (1.493791005s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:3.3
E0108 20:17:51.516737  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:3.3: (1.39209372s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:latest: (1.266663641s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)
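
minikube cache add pulls the named image on the host and loads it into the cluster's runtime, which is why each add above takes a second or so. The cache subcommands used by these tests, in sketch form (note that list and delete run without -p, matching the log):

    out/minikube-linux-arm64 -p functional-819954 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl images    # confirm the image reached containerd
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1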

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-819954 /tmp/TestFunctionalserialCacheCmdcacheadd_local2899162128/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cache add minikube-local-cache-test:functional-819954
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cache delete minikube-local-cache-test:functional-819954
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-819954
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (351.25754ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cache reload
E0108 20:17:56.636976  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 cache reload: (1.153890203s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)
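
The non-zero crictl inspecti exit above is the expected mid-sequence state: the image has just been removed from the node, and cache reload pushes every cached image back in, after which the same inspecti succeeds.

    out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-arm64 -p functional-819954 cache reload
    out/minikube-linux-arm64 -p functional-819954 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again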

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 kubectl -- --context functional-819954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-819954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 logs: (1.588073474s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 config get cpus: exit status 14 (111.39874ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 config get cpus: exit status 14 (124.333969ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)
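
The exit status 14 results above are the expected response from minikube config get when a key has no value, which the test provokes by unsetting cpus before each read:

    out/minikube-linux-arm64 -p functional-819954 config set cpus 2
    out/minikube-linux-arm64 -p functional-819954 config get cpus     # prints 2, exit 0
    out/minikube-linux-arm64 -p functional-819954 config unset cpus
    out/minikube-linux-arm64 -p functional-819954 config get cpus     # "specified key could not be found", exit 14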

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-819954 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-819954 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 687405: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.64s)

                                                
                                    
TestFunctional/parallel/DryRun (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-819954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-819954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (271.410647ms)

                                                
                                                
-- stdout --
	* [functional-819954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:20:03.238469  686895 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:20:03.238920  686895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:20:03.238934  686895 out.go:309] Setting ErrFile to fd 2...
	I0108 20:20:03.238941  686895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:20:03.239336  686895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:20:03.239786  686895 out.go:303] Setting JSON to false
	I0108 20:20:03.240874  686895 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10944,"bootTime":1704734260,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:20:03.240984  686895 start.go:138] virtualization:  
	I0108 20:20:03.245085  686895 out.go:177] * [functional-819954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:20:03.247164  686895 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:20:03.247219  686895 notify.go:220] Checking for updates...
	I0108 20:20:03.250556  686895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:20:03.253755  686895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:20:03.256084  686895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:20:03.258200  686895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:20:03.260302  686895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:20:03.263019  686895 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:20:03.263861  686895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:20:03.297745  686895 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:20:03.297952  686895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:20:03.400030  686895 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:20:03.389345219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:20:03.400175  686895 docker.go:295] overlay module found
	I0108 20:20:03.402971  686895 out.go:177] * Using the docker driver based on existing profile
	I0108 20:20:03.404675  686895 start.go:298] selected driver: docker
	I0108 20:20:03.404693  686895 start.go:902] validating driver "docker" against &{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:20:03.404810  686895 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:20:03.407743  686895 out.go:177] 
	W0108 20:20:03.409960  686895 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 20:20:03.411820  686895 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-819954 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.56s)
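The dry-run above exercises minikube's memory validation: a request of 250MiB is rejected against the 1800MB usable minimum and the command exits with RSRC_INSUFFICIENT_REQ_MEMORY. A minimal Go sketch of that kind of check follows; the constant and function names are illustrative assumptions, not minikube's actual validation code.

package main

import "fmt"

// Usable minimum reported in the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory rejects a requested allocation (in MiB) below the
// usable minimum, mirroring the exit-status-23 path hit by --memory 250MB.
func validateRequestedMemory(requestedMiB int) error {
	if requestedMiB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}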

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-819954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-819954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (304.636544ms)

                                                
                                                
-- stdout --
	* [functional-819954] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:20:03.807576  687028 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:20:03.807819  687028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:20:03.807850  687028 out.go:309] Setting ErrFile to fd 2...
	I0108 20:20:03.807874  687028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:20:03.811532  687028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:20:03.812106  687028 out.go:303] Setting JSON to false
	I0108 20:20:03.813177  687028 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10944,"bootTime":1704734260,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:20:03.814713  687028 start.go:138] virtualization:  
	I0108 20:20:03.817521  687028 out.go:177] * [functional-819954] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0108 20:20:03.821189  687028 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:20:03.823440  687028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:20:03.821298  687028 notify.go:220] Checking for updates...
	I0108 20:20:03.827833  687028 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:20:03.830201  687028 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:20:03.832161  687028 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:20:03.834287  687028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:20:03.836634  687028 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:20:03.837308  687028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:20:03.874580  687028 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:20:03.874708  687028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:20:03.991057  687028 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:20:03.980681414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:20:03.991171  687028 docker.go:295] overlay module found
	I0108 20:20:03.993409  687028 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 20:20:03.995408  687028 start.go:298] selected driver: docker
	I0108 20:20:03.995431  687028 start.go:902] validating driver "docker" against &{Name:functional-819954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-819954 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:20:03.995587  687028 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:20:03.998388  687028 out.go:177] 
	W0108 20:20:04.000125  687028 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 20:20:04.009106  687028 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)
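The French output above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure ("L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", i.e. the requested 250MiB allocation is below the usable minimum of 1800MB), localized for this test. A hedged Go sketch of how one might reproduce it, assuming the locale is selected through environment variables; the test's actual mechanism is not shown in this log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-819954",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=containerd")
	// Assumption: a French locale in the environment selects the localized messages.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("expected a non-zero exit for the undersized --memory request")
		return
	}
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("got the localized insufficient-memory error, as in the log above")
	}
}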

                                                
                                    
TestFunctional/parallel/StatusCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.58s)
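The second status invocation above passes a Go template through -f, which is rendered against the profile's status fields (the "kublet" label is literal template text, not a field name). A small self-contained sketch of that rendering, using a stand-in struct with only the fields the template references:

package main

import (
	"os"
	"text/template"
)

// Stand-in for the status structure the template is evaluated against.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template string passed to `status -f` in the run above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}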

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-819954 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-819954 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-gzqp2" [377e3e29-7897-4ea3-b0eb-bb8ef7e7c885] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-gzqp2" [377e3e29-7897-4ea3-b0eb-bb8ef7e7c885] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005147489s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31382
functional_test.go:1674: http://192.168.49.2:31382: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-gzqp2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31382
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.78s)
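The connectivity check above boils down to fetching the NodePort URL that `service hello-node-connect --url` printed and confirming the echoserver response names the serving pod. A minimal sketch of that request, with the URL hard-coded from this particular run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Endpoint reported by `service hello-node-connect --url` in the log above.
	resp, err := http.Get("http://192.168.49.2:31382")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("success: body names the hello-node-connect pod")
	}
}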

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (77.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1a29cdd1-3689-4c64-b1f6-78051dd0f4cd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004225865s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-819954 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-819954 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-819954 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-819954 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-819954 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-819954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [58dfca0f-1132-465f-aebf-009d6185ae46] Pending
E0108 20:19:08.318772  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [58dfca0f-1132-465f-aebf-009d6185ae46] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [58dfca0f-1132-465f-aebf-009d6185ae46] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 59.004623031s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-819954 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-819954 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-819954 delete -f testdata/storage-provisioner/pod.yaml: (1.359112661s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-819954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [08587fd9-7200-469e-8905-c70d3ee1f798] Pending
helpers_test.go:344: "sp-pod" [08587fd9-7200-469e-8905-c70d3ee1f798] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005892097s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-819954 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (77.90s)
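The persistence check above writes a file into the PVC-backed mount, deletes the pod, recreates it from the same manifest, and verifies the file is still there. A condensed sketch of those steps driven with kubectl via os/exec, the same way the harness runs its commands; readiness waits and error handling are elided to keep it short.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the functional-819954 context and echoes output.
func run(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-819954"}, args...)...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	_ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")      // write to the claim-backed mount
	_ = run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same claim
	_ = run("exec", "sp-pod", "--", "ls", "/tmp/mount")             // foo should still be listed
}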

                                                
                                    
TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh -n functional-819954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cp functional-819954:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1918641299/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh -n functional-819954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh -n functional-819954 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.30s)
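CpCmd round-trips a file: copy it into the node with `minikube cp`, read it back over `minikube ssh`, and compare. A short sketch of that check, with the binary path and profile name taken from the log and whitespace trimmed before comparing:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, _ := os.ReadFile("testdata/cp-test.txt")

	_ = exec.Command("out/minikube-linux-arm64", "-p", "functional-819954",
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()

	remote, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-819954",
		"ssh", "-n", "functional-819954", "sudo cat /home/docker/cp-test.txt").Output()

	fmt.Println("round-trip matches:", bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)))
}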

                                                
                                    
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/654805/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /etc/test/nested/copy/654805/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
TestFunctional/parallel/CertSync (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/654805.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /etc/ssl/certs/654805.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/654805.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /usr/share/ca-certificates/654805.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/6548052.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /etc/ssl/certs/6548052.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/6548052.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /usr/share/ca-certificates/6548052.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.43s)
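The hashed filenames above (51391683.0, 3ec20f2e.0) are presumably OpenSSL subject-hash names for the synced certificates, which is how entries under /etc/ssl/certs are linked. A small sketch that derives the hash by shelling out to openssl rather than reimplementing it; the certificate path is reused from the log, and the check is an illustration rather than the test's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash -noout` prints the subject hash used for the .0 link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/654805.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hashed := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Stat(hashed); err == nil {
		fmt.Println("hashed link present:", hashed)
	}
}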

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh "sudo systemctl is-active docker": exit status 1 (374.703516ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh "sudo systemctl is-active crio": exit status 1 (410.137826ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)
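Both probes above report "inactive" with a non-zero exit, which is the passing case here: with containerd selected as the runtime, docker and crio should not be running. A minimal local sketch of the same check (the test runs it over `minikube ssh`; this version calls systemctl directly):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled treats a non-zero `systemctl is-active` exit with "inactive"
// on stdout as the expected state for a runtime that is not in use.
func runtimeDisabled(unit string) bool {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	return err != nil && strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		fmt.Println(unit, "disabled:", runtimeDisabled(unit))
	}
}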

                                                
                                    
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 version -o=json --components: (1.366241343s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-819954 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-819954
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-819954 image ls --format short --alsologtostderr:
I0108 20:20:16.486853  688797 out.go:296] Setting OutFile to fd 1 ...
I0108 20:20:16.487012  688797 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:16.487017  688797 out.go:309] Setting ErrFile to fd 2...
I0108 20:20:16.487023  688797 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:16.487303  688797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
I0108 20:20:16.487988  688797 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:16.488118  688797 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:16.488769  688797 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
I0108 20:20:16.521368  688797 ssh_runner.go:195] Run: systemctl --version
I0108 20:20:16.521427  688797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
I0108 20:20:16.546704  688797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
I0108 20:20:16.660005  688797 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-819954 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-819954  | sha256:99a6e3 | 1kB    |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | alpine             | sha256:74077e | 17.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/nginx                     | latest             | sha256:8aea65 | 67.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-819954 image ls --format table --alsologtostderr:
I0108 20:20:17.448064  688974 out.go:296] Setting OutFile to fd 1 ...
I0108 20:20:17.448513  688974 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:17.448546  688974 out.go:309] Setting ErrFile to fd 2...
I0108 20:20:17.448568  688974 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:17.448856  688974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
I0108 20:20:17.449609  688974 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:17.453305  688974 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:17.454029  688974 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
I0108 20:20:17.480693  688974 ssh_runner.go:195] Run: systemctl --version
I0108 20:20:17.480757  688974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
I0108 20:20:17.514095  688974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
I0108 20:20:17.615479  688974 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-819954 image ls --format json --alsologtostderr:
[{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c
53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026"],"repoTags":["docker.io/library/nginx:latest"],"size":"67219394"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikub
e/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d
0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:99a6e3b8675e7aef75190abe5ffdba4938988d94506d5952e203b0ea222ef25a","repoDigests":[],"repoTags":["docker.io/library/miniku
be-local-cache-test:functional-819954"],"size":"1005"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17610338"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-819954 image ls --format json --alsologtostderr:
I0108 20:20:17.123642  688900 out.go:296] Setting OutFile to fd 1 ...
I0108 20:20:17.123828  688900 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:17.123834  688900 out.go:309] Setting ErrFile to fd 2...
I0108 20:20:17.123839  688900 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:17.124118  688900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
I0108 20:20:17.124784  688900 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:17.124930  688900 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:17.125524  688900 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
I0108 20:20:17.147871  688900 ssh_runner.go:195] Run: systemctl --version
I0108 20:20:17.147929  688900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
I0108 20:20:17.169435  688900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
I0108 20:20:17.292514  688900 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
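The JSON printed above is a flat array of image records (id, repoDigests, repoTags, and size as a string), presumably assembled from the `sudo crictl images --output json` call visible in the stderr. A small sketch of decoding that shape in Go, using one entry copied from the output above as sample data:

package main

import (
	"encoding/json"
	"fmt"
)

// Image mirrors the fields shown in the `image ls --format json` output.
type Image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	data := []byte(`[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]`)
	var images []Image
	if err := json.Unmarshal(data, &images); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}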

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-819954 image ls --format yaml --alsologtostderr:
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:99a6e3b8675e7aef75190abe5ffdba4938988d94506d5952e203b0ea222ef25a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-819954
size: "1005"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
repoTags:
- docker.io/library/nginx:latest
size: "67219394"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "17610338"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-819954 image ls --format yaml --alsologtostderr:
I0108 20:20:16.797020  688864 out.go:296] Setting OutFile to fd 1 ...
I0108 20:20:16.797217  688864 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:16.797233  688864 out.go:309] Setting ErrFile to fd 2...
I0108 20:20:16.797245  688864 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:16.797583  688864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
I0108 20:20:16.798521  688864 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:16.798708  688864 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:16.799341  688864 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
I0108 20:20:16.824136  688864 ssh_runner.go:195] Run: systemctl --version
I0108 20:20:16.824200  688864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
I0108 20:20:16.847091  688864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
I0108 20:20:16.947224  688864 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh pgrep buildkitd: exit status 1 (441.739068ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image build -t localhost/my-image:functional-819954 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-819954 image build -t localhost/my-image:functional-819954 testdata/build --alsologtostderr: (2.897255151s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-819954 image build -t localhost/my-image:functional-819954 testdata/build --alsologtostderr:
I0108 20:20:17.551186  688982 out.go:296] Setting OutFile to fd 1 ...
I0108 20:20:17.551809  688982 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:17.551844  688982 out.go:309] Setting ErrFile to fd 2...
I0108 20:20:17.551866  688982 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:20:17.552197  688982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
I0108 20:20:17.552964  688982 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:17.555456  688982 config.go:182] Loaded profile config "functional-819954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 20:20:17.556099  688982 cli_runner.go:164] Run: docker container inspect functional-819954 --format={{.State.Status}}
I0108 20:20:17.580039  688982 ssh_runner.go:195] Run: systemctl --version
I0108 20:20:17.580091  688982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819954
I0108 20:20:17.599301  688982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/functional-819954/id_rsa Username:docker}
I0108 20:20:17.703446  688982 build_images.go:151] Building image from path: /tmp/build.3257677120.tar
I0108 20:20:17.703527  688982 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 20:20:17.715999  688982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3257677120.tar
I0108 20:20:17.720850  688982 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3257677120.tar: stat -c "%s %y" /var/lib/minikube/build/build.3257677120.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3257677120.tar': No such file or directory
I0108 20:20:17.720890  688982 ssh_runner.go:362] scp /tmp/build.3257677120.tar --> /var/lib/minikube/build/build.3257677120.tar (3072 bytes)
I0108 20:20:17.753469  688982 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3257677120
I0108 20:20:17.765410  688982 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3257677120 -xf /var/lib/minikube/build/build.3257677120.tar
I0108 20:20:17.776988  688982 containerd.go:378] Building image: /var/lib/minikube/build/build.3257677120
I0108 20:20:17.777169  688982 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3257677120 --local dockerfile=/var/lib/minikube/build/build.3257677120 --output type=image,name=localhost/my-image:functional-819954
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3bf8ade3db2e37d82ad094b74657fd99df3157cdb29d98925e9f3bffc4d47b91 0.0s done
#8 exporting config sha256:d456ad926068084d2d95f7584d581ac30c74561523b717db24f13fb63b6c919e 0.0s done
#8 naming to localhost/my-image:functional-819954 done
#8 DONE 0.1s
I0108 20:20:20.320865  688982 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3257677120 --local dockerfile=/var/lib/minikube/build/build.3257677120 --output type=image,name=localhost/my-image:functional-819954: (2.543649232s)
I0108 20:20:20.320963  688982 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3257677120
I0108 20:20:20.333125  688982 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3257677120.tar
I0108 20:20:20.344439  688982 build_images.go:207] Built localhost/my-image:functional-819954 from /tmp/build.3257677120.tar
I0108 20:20:20.344472  688982 build_images.go:123] succeeded building to: functional-819954
I0108 20:20:20.344477  688982 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)

TestFunctional/parallel/ImageCommands/Setup (2.45s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.426908371s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-819954
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.45s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image rm gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-819954
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 image save --daemon gcr.io/google-containers/addon-resizer:functional-819954 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-819954
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-819954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-819954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-819954 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-819954 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 685255: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-819954 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (66.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-819954 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [279a87e5-82f5-46e1-92b1-0a8e3e9c43fc] Pending
helpers_test.go:344: "nginx-svc" [279a87e5-82f5-46e1-92b1-0a8e3e9c43fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [279a87e5-82f5-46e1-92b1-0a8e3e9c43fc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 1m6.003616099s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (66.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-819954 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.178.218 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-819954 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.68s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.68s)

TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "545.728216ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "88.5738ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.74s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "605.215903ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "129.624436ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.74s)

TestFunctional/parallel/MountCmd/any-port (8.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdany-port3274886081/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704745202633916452" to /tmp/TestFunctionalparallelMountCmdany-port3274886081/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704745202633916452" to /tmp/TestFunctionalparallelMountCmdany-port3274886081/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704745202633916452" to /tmp/TestFunctionalparallelMountCmdany-port3274886081/001/test-1704745202633916452
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (574.227076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:20 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:20 test-1704745202633916452
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh cat /mount-9p/test-1704745202633916452
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-819954 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7c584458-0a92-448a-954e-108862ac78f7] Pending
helpers_test.go:344: "busybox-mount" [7c584458-0a92-448a-954e-108862ac78f7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7c584458-0a92-448a-954e-108862ac78f7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7c584458-0a92-448a-954e-108862ac78f7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004171688s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-819954 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdany-port3274886081/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.15s)

TestFunctional/parallel/MountCmd/specific-port (2.71s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdspecific-port149394446/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (637.699188ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdspecific-port149394446/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh "sudo umount -f /mount-9p": exit status 1 (495.786365ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-819954 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdspecific-port149394446/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.71s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T" /mount1: exit status 1 (763.591071ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-819954 ssh "findmnt -T" /mount3
2024/01/08 20:20:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-819954 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-819954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup690554436/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-819954
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-819954
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-819954
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (111.65s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-918006 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0108 20:20:30.239859  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-918006 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m51.652529929s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (111.65s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons enable ingress --alsologtostderr -v=5: (10.532313938s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.53s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-918006 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.73s)

TestJSONOutput/start/Command (75.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-576696 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0108 20:23:43.201504  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.206745  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.217016  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.237255  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.277568  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.357856  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.518111  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:43.838705  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:44.479566  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:45.759782  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:48.320053  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:23:53.440403  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:24:03.680654  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:24:24.160896  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-576696 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m15.377672367s)
--- PASS: TestJSONOutput/start/Command (75.38s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.97s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-576696 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.97s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-576696 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-576696 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-576696 --output=json --user=testUser: (5.86318029s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-516794 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-516794 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (103.940577ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a272eef-42c5-45be-933b-a429883d1450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-516794] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"97bf4943-daaa-4c06-a32b-6a052abe6e11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"1a8fba10-e9c8-4239-8c75-78ff3c2e32e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"84cc1552-2381-4193-9e4e-c94e39622aff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig"}}
	{"specversion":"1.0","id":"8a4628ff-c4c6-46c3-8574-c0469b848e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube"}}
	{"specversion":"1.0","id":"0e961092-6b28-41b9-ae66-41946ecb3d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"75344f80-3eb2-4b45-8ee2-f36cd1d6e6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1865804c-6901-477c-8885-1b5e9b484f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-516794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-516794
--- PASS: TestErrorJSONOutput (0.27s)

TestKicCustomNetwork/create_custom_network (43.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-102891 --network=
E0108 20:25:05.121115  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-102891 --network=: (41.229233017s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-102891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-102891
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-102891: (2.06481851s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.32s)

TestKicCustomNetwork/use_default_bridge_network (34.75s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-686123 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-686123 --network=bridge: (32.735840265s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-686123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-686123
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-686123: (1.990462649s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.75s)

TestKicExistingNetwork (33.21s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-918160 --network=existing-network
E0108 20:26:27.042091  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-918160 --network=existing-network: (30.979909413s)
helpers_test.go:175: Cleaning up "existing-network-918160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-918160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-918160: (2.076766899s)
--- PASS: TestKicExistingNetwork (33.21s)

TestKicCustomSubnet (37.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-713589 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-713589 --subnet=192.168.60.0/24: (34.961589447s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-713589 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-713589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-713589
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-713589: (2.184438192s)
--- PASS: TestKicCustomSubnet (37.17s)

TestKicStaticIP (35.84s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-302952 --static-ip=192.168.200.200
E0108 20:27:26.515996  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:26.521448  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:26.531786  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:26.552185  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:26.592887  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:26.673545  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:26.834079  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:27.155016  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:27.795886  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:29.076754  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:31.637096  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:36.757351  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:27:46.391313  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:27:46.998555  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-302952 --static-ip=192.168.200.200: (33.174030317s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-302952 ip
helpers_test.go:175: Cleaning up "static-ip-302952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-302952
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-302952: (2.473577847s)
--- PASS: TestKicStaticIP (35.84s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (71.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-193115 --driver=docker  --container-runtime=containerd
E0108 20:28:07.478811  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-193115 --driver=docker  --container-runtime=containerd: (32.362478784s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-195675 --driver=docker  --container-runtime=containerd
E0108 20:28:43.201514  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:28:48.439813  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-195675 --driver=docker  --container-runtime=containerd: (33.257284288s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-193115
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-195675
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-195675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-195675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-195675: (2.032742755s)
helpers_test.go:175: Cleaning up "first-193115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-193115
E0108 20:29:10.882744  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-193115: (2.246398969s)
--- PASS: TestMinikubeProfile (71.23s)

TestMountStart/serial/StartWithMountFirst (6.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-169419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-169419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.206607174s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.21s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-169419 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (6.78s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-171322 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-171322 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.775445206s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.78s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-171322 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-169419 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-169419 --alsologtostderr -v=5: (1.66850908s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-171322 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-171322
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-171322: (1.226722544s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.47s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-171322
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-171322: (6.472916374s)
--- PASS: TestMountStart/serial/RestartStopped (7.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-171322 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-308765 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0108 20:30:10.360103  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-308765 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m10.888289802s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.49s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-308765 -- rollout status deployment/busybox: (2.556263628s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-fwld5 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-xg2lw -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-fwld5 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-xg2lw -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-fwld5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-xg2lw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.73s)
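The deployment check above rolls out a two-replica busybox deployment and then resolves kubernetes.io, kubernetes.default and the full service FQDN from each pod, exercising cluster DNS from pods on both nodes. A quick manual equivalent using the same context; `-o wide` is only there to show which node each replica landed on, and `exec deploy/busybox` hits a single replica where the test execs into every pod by name:

	# Show which node each busybox replica is scheduled on
	kubectl --context multinode-308765 get pods -o wide
	# Resolve the in-cluster service name from one replica
	kubectl --context multinode-308765 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local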

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-fwld5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-fwld5 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-xg2lw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-308765 -- exec busybox-5bc68d56bd-xg2lw -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.14s)
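The pipeline above is how the test recovers the host IP from inside each pod: `nslookup host.minikube.internal` runs in busybox, `awk 'NR==5'` keeps the fifth line of its output (the line this busybox build uses for the resolved address), and `cut -d' ' -f3` takes the field holding the IP, which the pod then pings (192.168.58.1 here, the host side of the cluster's docker network). A stand-alone sketch with an illustrative pod-name placeholder:

	# Extract the resolved IP for host.minikube.internal from inside a pod, then ping it
	kubectl --context multinode-308765 exec <busybox-pod> -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context multinode-308765 exec <busybox-pod> -- sh -c "ping -c 1 192.168.58.1"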

                                                
                                    
TestMultiNode/serial/AddNode (18.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-308765 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-308765 -v 3 --alsologtostderr: (17.34488419s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-308765 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp testdata/cp-test.txt multinode-308765:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3663037003/001/cp-test_multinode-308765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765:/home/docker/cp-test.txt multinode-308765-m02:/home/docker/cp-test_multinode-308765_multinode-308765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test_multinode-308765_multinode-308765-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765:/home/docker/cp-test.txt multinode-308765-m03:/home/docker/cp-test_multinode-308765_multinode-308765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m03 "sudo cat /home/docker/cp-test_multinode-308765_multinode-308765-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp testdata/cp-test.txt multinode-308765-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3663037003/001/cp-test_multinode-308765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765-m02:/home/docker/cp-test.txt multinode-308765:/home/docker/cp-test_multinode-308765-m02_multinode-308765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765 "sudo cat /home/docker/cp-test_multinode-308765-m02_multinode-308765.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765-m02:/home/docker/cp-test.txt multinode-308765-m03:/home/docker/cp-test_multinode-308765-m02_multinode-308765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m03 "sudo cat /home/docker/cp-test_multinode-308765-m02_multinode-308765-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp testdata/cp-test.txt multinode-308765-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3663037003/001/cp-test_multinode-308765-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765-m03:/home/docker/cp-test.txt multinode-308765:/home/docker/cp-test_multinode-308765-m03_multinode-308765.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765 "sudo cat /home/docker/cp-test_multinode-308765-m03_multinode-308765.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765-m03:/home/docker/cp-test.txt multinode-308765-m02:/home/docker/cp-test_multinode-308765-m03_multinode-308765-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test_multinode-308765-m03_multinode-308765-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.56s)
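The copy matrix above cycles through every direction `minikube cp` supports and verifies each transfer with `ssh -n <node> sudo cat`. Condensed to the three forms, using the same profile and node names; the /tmp destination is illustrative:

	out/minikube-linux-arm64 -p multinode-308765 cp testdata/cp-test.txt multinode-308765:/home/docker/cp-test.txt          # local file -> node
	out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765:/home/docker/cp-test.txt /tmp/cp-test.txt              # node -> local file
	out/minikube-linux-arm64 -p multinode-308765 cp multinode-308765:/home/docker/cp-test.txt multinode-308765-m02:/home/docker/cp-test.txt   # node -> node
	out/minikube-linux-arm64 -p multinode-308765 ssh -n multinode-308765-m02 "sudo cat /home/docker/cp-test.txt"            # confirm it arrived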

                                                
                                    
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-308765 node stop m03: (1.255021405s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-308765 status: exit status 7 (599.296656ms)

                                                
                                                
-- stdout --
	multinode-308765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-308765-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-308765-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr: exit status 7 (599.084212ms)

                                                
                                                
-- stdout --
	multinode-308765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-308765-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-308765-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:31:27.938046  736205 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:31:27.938202  736205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:31:27.938212  736205 out.go:309] Setting ErrFile to fd 2...
	I0108 20:31:27.938218  736205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:31:27.938552  736205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:31:27.938753  736205 out.go:303] Setting JSON to false
	I0108 20:31:27.938841  736205 mustload.go:65] Loading cluster: multinode-308765
	I0108 20:31:27.938939  736205 notify.go:220] Checking for updates...
	I0108 20:31:27.939295  736205 config.go:182] Loaded profile config "multinode-308765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:31:27.939314  736205 status.go:255] checking status of multinode-308765 ...
	I0108 20:31:27.940335  736205 cli_runner.go:164] Run: docker container inspect multinode-308765 --format={{.State.Status}}
	I0108 20:31:27.964436  736205 status.go:330] multinode-308765 host status = "Running" (err=<nil>)
	I0108 20:31:27.964465  736205 host.go:66] Checking if "multinode-308765" exists ...
	I0108 20:31:27.964809  736205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-308765
	I0108 20:31:27.986736  736205 host.go:66] Checking if "multinode-308765" exists ...
	I0108 20:31:27.987088  736205 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:31:27.987144  736205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-308765
	I0108 20:31:28.023280  736205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/multinode-308765/id_rsa Username:docker}
	I0108 20:31:28.119856  736205 ssh_runner.go:195] Run: systemctl --version
	I0108 20:31:28.125645  736205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:31:28.139785  736205 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:31:28.222218  736205 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-08 20:31:28.21130177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:31:28.222824  736205 kubeconfig.go:92] found "multinode-308765" server: "https://192.168.58.2:8443"
	I0108 20:31:28.222848  736205 api_server.go:166] Checking apiserver status ...
	I0108 20:31:28.222890  736205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:31:28.237333  736205 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I0108 20:31:28.248868  736205 api_server.go:182] apiserver freezer: "11:freezer:/docker/a80df07e2268d6af498b0baa44179005e4329d9f22524cbf55186884f528c98a/kubepods/burstable/pod1497134dbee8d6b345a73444394781ca/c22eabc72d49adf9a9fb8bdc4e7e770ea7c948f121c23abda901c680d50f4802"
	I0108 20:31:28.248957  736205 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a80df07e2268d6af498b0baa44179005e4329d9f22524cbf55186884f528c98a/kubepods/burstable/pod1497134dbee8d6b345a73444394781ca/c22eabc72d49adf9a9fb8bdc4e7e770ea7c948f121c23abda901c680d50f4802/freezer.state
	I0108 20:31:28.259403  736205 api_server.go:204] freezer state: "THAWED"
	I0108 20:31:28.259434  736205 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 20:31:28.268378  736205 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 20:31:28.268408  736205 status.go:421] multinode-308765 apiserver status = Running (err=<nil>)
	I0108 20:31:28.268462  736205 status.go:257] multinode-308765 status: &{Name:multinode-308765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:31:28.268485  736205 status.go:255] checking status of multinode-308765-m02 ...
	I0108 20:31:28.268823  736205 cli_runner.go:164] Run: docker container inspect multinode-308765-m02 --format={{.State.Status}}
	I0108 20:31:28.286820  736205 status.go:330] multinode-308765-m02 host status = "Running" (err=<nil>)
	I0108 20:31:28.286847  736205 host.go:66] Checking if "multinode-308765-m02" exists ...
	I0108 20:31:28.287146  736205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-308765-m02
	I0108 20:31:28.305499  736205 host.go:66] Checking if "multinode-308765-m02" exists ...
	I0108 20:31:28.305830  736205 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:31:28.305884  736205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-308765-m02
	I0108 20:31:28.324446  736205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/17907-649468/.minikube/machines/multinode-308765-m02/id_rsa Username:docker}
	I0108 20:31:28.427644  736205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:31:28.443233  736205 status.go:257] multinode-308765-m02 status: &{Name:multinode-308765-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:31:28.443270  736205 status.go:255] checking status of multinode-308765-m03 ...
	I0108 20:31:28.443586  736205 cli_runner.go:164] Run: docker container inspect multinode-308765-m03 --format={{.State.Status}}
	I0108 20:31:28.462763  736205 status.go:330] multinode-308765-m03 host status = "Stopped" (err=<nil>)
	I0108 20:31:28.462789  736205 status.go:343] host is not running, skipping remaining checks
	I0108 20:31:28.462803  736205 status.go:257] multinode-308765-m03 status: &{Name:multinode-308765-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
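The --alsologtostderr trace above is a good illustration of how `minikube status` grades the apiserver on the docker driver: it locates the kube-apiserver process over SSH, maps it to its freezer cgroup, checks that the cgroup is THAWED, and finally probes https://<node-ip>:8443/healthz; a stopped node (m03 here) short-circuits after the container inspect. A rough manual walk-through of the same checks; the PID and cgroup-path placeholders come from the first two commands, and -k merely skips TLS verification:

	out/minikube-linux-arm64 -p multinode-308765 ssh -- 'sudo pgrep -xnf kube-apiserver.*minikube.*'
	out/minikube-linux-arm64 -p multinode-308765 ssh -- 'sudo egrep "^[0-9]+:freezer:" /proc/<apiserver-pid>/cgroup'
	out/minikube-linux-arm64 -p multinode-308765 ssh -- 'sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state'   # expect THAWED
	curl -k https://192.168.58.2:8443/healthz                                                                           # expect "ok"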

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-308765 node start m03 --alsologtostderr: (11.731958157s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.59s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (119.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-308765
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-308765
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-308765: (25.18466248s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-308765 --wait=true -v=8 --alsologtostderr
E0108 20:32:26.515520  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:32:46.391174  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:32:54.200937  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-308765 --wait=true -v=8 --alsologtostderr: (1m34.62859518s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-308765
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 node delete m03
E0108 20:33:43.201556  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-308765 node delete m03: (4.519810174s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 stop
E0108 20:34:09.440657  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-308765 stop: (24.138080308s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-308765 status: exit status 7 (106.102179ms)

                                                
                                                
-- stdout --
	multinode-308765
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-308765-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr: exit status 7 (110.857873ms)

                                                
                                                
-- stdout --
	multinode-308765
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-308765-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:34:10.677152  745050 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:34:10.677308  745050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:34:10.677317  745050 out.go:309] Setting ErrFile to fd 2...
	I0108 20:34:10.677324  745050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:34:10.677599  745050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:34:10.677796  745050 out.go:303] Setting JSON to false
	I0108 20:34:10.677881  745050 mustload.go:65] Loading cluster: multinode-308765
	I0108 20:34:10.677948  745050 notify.go:220] Checking for updates...
	I0108 20:34:10.678295  745050 config.go:182] Loaded profile config "multinode-308765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:34:10.678306  745050 status.go:255] checking status of multinode-308765 ...
	I0108 20:34:10.679191  745050 cli_runner.go:164] Run: docker container inspect multinode-308765 --format={{.State.Status}}
	I0108 20:34:10.698982  745050 status.go:330] multinode-308765 host status = "Stopped" (err=<nil>)
	I0108 20:34:10.699007  745050 status.go:343] host is not running, skipping remaining checks
	I0108 20:34:10.699020  745050 status.go:257] multinode-308765 status: &{Name:multinode-308765 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:34:10.699044  745050 status.go:255] checking status of multinode-308765-m02 ...
	I0108 20:34:10.699362  745050 cli_runner.go:164] Run: docker container inspect multinode-308765-m02 --format={{.State.Status}}
	I0108 20:34:10.717385  745050 status.go:330] multinode-308765-m02 host status = "Stopped" (err=<nil>)
	I0108 20:34:10.717411  745050 status.go:343] host is not running, skipping remaining checks
	I0108 20:34:10.717419  745050 status.go:257] multinode-308765-m02 status: &{Name:multinode-308765-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.36s)
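Worth noting from the two status calls above: after `minikube stop`, `minikube status` still prints the per-node breakdown but exits non-zero (exit status 7 in both runs), so the test asserts on the text while tolerating the exit code. A small sketch of handling that in a script; the meaning of 7 is inferred from these runs rather than from documentation:

	out/minikube-linux-arm64 -p multinode-308765 status
	rc=$?
	# In the runs above, exit code 7 accompanies Host/Kubelet "Stopped"; treat it as "cluster down", not "command failed"
	[ "$rc" -eq 7 ] && echo "all nodes stopped"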

                                                
                                    
TestMultiNode/serial/RestartMultiNode (80.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-308765 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-308765 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.911330363s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-308765 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.74s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-308765
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-308765-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-308765-m02 --driver=docker  --container-runtime=containerd: exit status 14 (103.971701ms)

                                                
                                                
-- stdout --
	* [multinode-308765-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-308765-m02' is duplicated with machine name 'multinode-308765-m02' in profile 'multinode-308765'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-308765-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-308765-m03 --driver=docker  --container-runtime=containerd: (31.870668528s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-308765
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-308765: exit status 80 (581.137227ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-308765
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-308765-m03 already exists in multinode-308765-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-308765-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-308765-m03: (2.029672488s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.68s)

                                                
                                    
TestPreload (178.61s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-062460 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-062460 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.867358171s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-062460 image pull gcr.io/k8s-minikube/busybox
E0108 20:37:26.515496  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-062460 image pull gcr.io/k8s-minikube/busybox: (1.503132415s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-062460
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-062460: (12.024870391s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-062460 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0108 20:37:46.391015  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:38:43.201775  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-062460 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m26.574853885s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-062460 image list
helpers_test.go:175: Cleaning up "test-preload-062460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-062460
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-062460: (2.37990969s)
--- PASS: TestPreload (178.61s)
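The preload scenario above is: start a v1.24.4 cluster with --preload=false (so no preloaded image tarball), pull an extra image into the node, stop, restart the same profile, and confirm via `image list` that the pulled image survived the restart. The same sequence by hand, with an illustrative profile name:

	out/minikube-linux-arm64 start -p preload-demo --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
	out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p preload-demo
	out/minikube-linux-arm64 start -p preload-demo --wait=true
	out/minikube-linux-arm64 -p preload-demo image list | grep busybox   # the pulled image should still be present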

                                                
                                    
TestScheduledStopUnix (110.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-055386 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-055386 --memory=2048 --driver=docker  --container-runtime=containerd: (33.452378729s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-055386 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-055386 -n scheduled-stop-055386
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-055386 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-055386 --cancel-scheduled
E0108 20:40:06.244542  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-055386 -n scheduled-stop-055386
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-055386
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-055386 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-055386
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-055386: exit status 7 (109.380101ms)

                                                
                                                
-- stdout --
	scheduled-stop-055386
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-055386 -n scheduled-stop-055386
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-055386 -n scheduled-stop-055386: exit status 7 (93.001654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-055386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-055386
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-055386: (5.241492957s)
--- PASS: TestScheduledStopUnix (110.64s)
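The sequence above exercises scheduled stops end to end: queue a stop for later, read the pending countdown via the status template, cancel it, then queue a short one and confirm the host actually reaches Stopped (the exit-status-7 checks at the end). Condensed, with an illustrative profile name:

	out/minikube-linux-arm64 stop -p sched-demo --schedule 5m                     # queue a stop 5 minutes out
	out/minikube-linux-arm64 status -p sched-demo --format='{{.TimeToStop}}'      # inspect the pending schedule
	out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled                # cancel it
	out/minikube-linux-arm64 stop -p sched-demo --schedule 15s                    # re-queue; ~15s later status reports Stopped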

                                                
                                    
TestInsufficientStorage (13.9s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-581822 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-581822 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (11.290137227s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8cf5fd71-eca3-48f8-a159-ea8a9e336881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-581822] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"48308fab-89fb-4433-ae6e-2cd2ab6d674d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"42b694f1-716a-4a3f-aab5-967d1184007b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1cbea812-308f-45e5-85de-a5210ab3c705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig"}}
	{"specversion":"1.0","id":"bb8224b7-b53b-4950-a38d-8335ab5d9e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube"}}
	{"specversion":"1.0","id":"1ec70088-96d3-4fe9-9fcc-92155b0095a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f7994015-5832-4913-8160-cff81ea60359","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dadd6a95-5eb2-49f5-bd8c-9390a1d1986c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2a99e589-e9c5-4b17-968a-7e48bebc1b65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"df6d5b05-6fed-4b49-94f1-bd822533cb05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"27dae462-f348-4516-b041-99057b792e33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5c03412e-0a27-4004-8f15-1dd9015e8105","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-581822 in cluster insufficient-storage-581822","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"68cd6e1d-64fd-4fbf-9441-0060a385052f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dccb3c5-6a30-4ba8-b575-257497021dba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ae79b11-592d-4219-8ffa-cc77aa17e31e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-581822 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-581822 --output=json --layout=cluster: exit status 7 (327.463894ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-581822","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-581822","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:41:11.015792  762523 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-581822" does not appear in /home/jenkins/minikube-integration/17907-649468/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-581822 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-581822 --output=json --layout=cluster: exit status 7 (337.459465ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-581822","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-581822","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:41:11.354061  762576 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-581822" does not appear in /home/jenkins/minikube-integration/17907-649468/kubeconfig
	E0108 20:41:11.366927  762576 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/insufficient-storage-581822/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-581822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-581822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-581822: (1.942961974s)
--- PASS: TestInsufficientStorage (13.90s)
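This test trips minikube's disk-space preflight artificially: the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 values visible in the event stream make /var look full, so start exits with code 26 (RSRC_DOCKER_STORAGE) and every line of --output=json is a CloudEvent that can be filtered mechanically. A sketch of reproducing and parsing that, assuming jq; treating the two variables as plain environment variables is an assumption based on how they appear above:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p storage-demo --memory=2048 --output=json --driver=docker --container-runtime=containerd > events.json
	echo "start exit code: $?"                                                    # 26 in the run above
	jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data' events.json     # pull out the RSRC_DOCKER_STORAGE event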

                                                
                                    
TestRunningBinaryUpgrade (87.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.2571288587.exe start -p running-upgrade-165219 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.2571288587.exe start -p running-upgrade-165219 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.513871252s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-165219 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-165219 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.720944712s)
helpers_test.go:175: Cleaning up "running-upgrade-165219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-165219
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-165219: (3.169029314s)
--- PASS: TestRunningBinaryUpgrade (87.93s)

                                                
                                    
TestKubernetesUpgrade (375.06s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.590875189s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-205182
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-205182: (1.398987554s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-205182 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-205182 status --format={{.Host}}: exit status 7 (117.914261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0108 20:43:43.201915  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:43:49.561306  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.312263508s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-205182 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (105.787808ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-205182] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-205182
	    minikube start -p kubernetes-upgrade-205182 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2051822 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-205182 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
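
The refusal above is minikube's documented downgrade guard: trying to move an existing cluster to an older Kubernetes version exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal Go sketch of asserting that contract from a wrapper, with a placeholder binary path and profile name rather than anything taken from this run:

// downgrade_check.go: a sketch, not minikube's test code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are illustrative placeholders.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "example-profile",
		"--kubernetes-version=v1.16.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		fmt.Println("downgrade refused as expected (exit status 106)")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}
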
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0108 20:48:43.202173  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-205182 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.956533869s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-205182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-205182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-205182: (2.424060582s)
--- PASS: TestKubernetesUpgrade (375.06s)

TestMissingContainerUpgrade (166.2s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.890289097.exe start -p missing-upgrade-542172 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.890289097.exe start -p missing-upgrade-542172 --memory=2200 --driver=docker  --container-runtime=containerd: (1m28.159428448s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-542172
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-542172: (1.964412149s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-542172
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-542172 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0108 20:42:46.390790  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-542172 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.824344466s)
helpers_test.go:175: Cleaning up "missing-upgrade-542172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-542172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-542172: (2.834655571s)
--- PASS: TestMissingContainerUpgrade (166.20s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910474 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-910474 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.652299ms)

-- stdout --
	* [NoKubernetes-910474] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
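
The MK_USAGE failure above (exit status 14) comes from combining --no-kubernetes with an explicit --kubernetes-version. A minimal Go sketch of that kind of mutual-exclusion check, purely illustrative and not minikube's actual flag validation:

// flag_conflict.go: a sketch of a mutual-exclusion check, illustrative only.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the usage exit code seen in the log above
	}
	fmt.Println("flags ok")
}
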
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (40.28s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910474 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910474 --driver=docker  --container-runtime=containerd: (39.826652546s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-910474 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.28s)

TestNoKubernetes/serial/StartWithStopK8s (20.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910474 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910474 --no-kubernetes --driver=docker  --container-runtime=containerd: (18.192166494s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-910474 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-910474 status -o json: exit status 2 (468.05985ms)

-- stdout --
	{"Name":"NoKubernetes-910474","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
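
The status JSON above is what the test reads to confirm the host stays up while the Kubernetes components are stopped. A minimal Go sketch of decoding that shape; the struct is an assumption for illustration, not minikube's internal type:

// status_decode.go: decode the profile status JSON shown in the stdout block.
package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-910474","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)

	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// The --no-kubernetes flow expects a running host with Kubernetes stopped.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
}
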
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-910474
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-910474: (2.084998945s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.75s)

TestNoKubernetes/serial/Start (8.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910474 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910474 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.930346804s)
--- PASS: TestNoKubernetes/serial/Start (8.93s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-910474 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-910474 "sudo systemctl is-active --quiet service kubelet": exit status 1 (390.223875ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-910474
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-910474: (1.305013341s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (7.73s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910474 --driver=docker  --container-runtime=containerd
E0108 20:42:26.515606  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910474 --driver=docker  --container-runtime=containerd: (7.727128708s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.73s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-910474 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-910474 "sudo systemctl is-active --quiet service kubelet": exit status 1 (390.24484ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestStoppedBinaryUpgrade/Setup (1.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.73s)

TestStoppedBinaryUpgrade/Upgrade (109.64s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.287558360.exe start -p stopped-upgrade-307765 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.287558360.exe start -p stopped-upgrade-307765 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.034764628s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.287558360.exe -p stopped-upgrade-307765 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.287558360.exe -p stopped-upgrade-307765 stop: (20.09361314s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-307765 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-307765 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.514623987s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-307765
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-307765: (1.179202535s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestPause/serial/Start (83.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-280663 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0108 20:47:26.516386  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:47:46.390749  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-280663 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.0699596s)
--- PASS: TestPause/serial/Start (83.07s)

TestPause/serial/SecondStartNoReconfiguration (7.79s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-280663 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-280663 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.761996717s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.79s)

TestPause/serial/Pause (1.09s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-280663 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-280663 --alsologtostderr -v=5: (1.088783038s)
--- PASS: TestPause/serial/Pause (1.09s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-280663 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-280663 --output=json --layout=cluster: exit status 2 (425.877944ms)

-- stdout --
	{"Name":"pause-280663","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-280663","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
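
The cluster-layout JSON above reports component state with HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A minimal Go sketch that decodes an abbreviated copy of that payload; the structs model only the fields shown here and are an assumption for illustration:

// paused_status.go: inspect an abbreviated copy of the cluster-layout JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	Components map[string]component
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	raw := []byte(`{"Name":"pause-280663","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-280663","Components":{
			"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
			"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)

	var cs clusterStatus
	if err := json.Unmarshal(raw, &cs); err != nil {
		panic(err)
	}
	for name, c := range cs.Nodes[0].Components {
		fmt.Printf("%s: %s (%d)\n", name, c.StatusName, c.StatusCode)
	}
}
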
--- PASS: TestPause/serial/VerifyStatus (0.43s)

TestPause/serial/Unpause (1.09s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-280663 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-280663 --alsologtostderr -v=5: (1.088304046s)
--- PASS: TestPause/serial/Unpause (1.09s)

TestPause/serial/PauseAgain (1.09s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-280663 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-280663 --alsologtostderr -v=5: (1.088608726s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

TestPause/serial/DeletePaused (3.5s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-280663 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-280663 --alsologtostderr -v=5: (3.504148491s)
--- PASS: TestPause/serial/DeletePaused (3.50s)

TestPause/serial/VerifyDeletedResources (0.33s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-280663
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-280663: exit status 1 (23.277573ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-280663: no such volume

** /stderr **
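
The non-zero exit from docker volume inspect above is the signal that the profile's volume is gone after deletion. A minimal Go sketch of the same check, with a placeholder volume name:

// volume_gone.go: treat a non-zero `docker volume inspect` as "already deleted".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "volume", "inspect", "example-volume") // placeholder name
	if err := cmd.Run(); err != nil {
		fmt.Println("volume not found, as expected after delete:", err)
		return
	}
	fmt.Println("volume still exists")
}
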
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.33s)

TestNetworkPlugins/group/false (5.34s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-949157 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-949157 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (295.137334ms)

-- stdout --
	* [false-949157] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0108 20:49:06.400625  799514 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:49:06.400908  799514 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:49:06.400938  799514 out.go:309] Setting ErrFile to fd 2...
	I0108 20:49:06.400958  799514 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:49:06.401274  799514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-649468/.minikube/bin
	I0108 20:49:06.401815  799514 out.go:303] Setting JSON to false
	I0108 20:49:06.402839  799514 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12687,"bootTime":1704734260,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 20:49:06.402956  799514 start.go:138] virtualization:  
	I0108 20:49:06.407885  799514 out.go:177] * [false-949157] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:49:06.410057  799514 notify.go:220] Checking for updates...
	I0108 20:49:06.413158  799514 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:49:06.415881  799514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:49:06.417817  799514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-649468/kubeconfig
	I0108 20:49:06.419613  799514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-649468/.minikube
	I0108 20:49:06.421538  799514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:49:06.423586  799514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:49:06.426447  799514 config.go:182] Loaded profile config "force-systemd-flag-904652": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 20:49:06.426607  799514 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:49:06.455679  799514 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:49:06.455805  799514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:49:06.586707  799514 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:49:06.571629987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:49:06.586870  799514 docker.go:295] overlay module found
	I0108 20:49:06.589207  799514 out.go:177] * Using the docker driver based on user configuration
	I0108 20:49:06.591202  799514 start.go:298] selected driver: docker
	I0108 20:49:06.591267  799514 start.go:902] validating driver "docker" against <nil>
	I0108 20:49:06.591295  799514 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:49:06.593978  799514 out.go:177] 
	W0108 20:49:06.595828  799514 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0108 20:49:06.597577  799514 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-949157 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-949157

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-949157

>>> host: /etc/nsswitch.conf:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/hosts:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/resolv.conf:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-949157

>>> host: crictl pods:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: crictl containers:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> k8s: describe netcat deployment:
error: context "false-949157" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-949157" does not exist

>>> k8s: netcat logs:
error: context "false-949157" does not exist

>>> k8s: describe coredns deployment:
error: context "false-949157" does not exist

>>> k8s: describe coredns pods:
error: context "false-949157" does not exist

>>> k8s: coredns logs:
error: context "false-949157" does not exist

>>> k8s: describe api server pod(s):
error: context "false-949157" does not exist

>>> k8s: api server logs:
error: context "false-949157" does not exist

>>> host: /etc/cni:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: ip a s:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: ip r s:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: iptables-save:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: iptables table nat:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> k8s: describe kube-proxy daemon set:
error: context "false-949157" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-949157" does not exist

>>> k8s: kube-proxy logs:
error: context "false-949157" does not exist

>>> host: kubelet daemon status:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: kubelet daemon config:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> k8s: kubelet logs:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-949157

>>> host: docker daemon status:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: docker daemon config:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/docker/daemon.json:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: docker system info:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: cri-docker daemon status:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: cri-docker daemon config:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: cri-dockerd version:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: containerd daemon status:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: containerd daemon config:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/containerd/config.toml:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: containerd config dump:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: crio daemon status:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: crio daemon config:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: /etc/crio:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

>>> host: crio config:
* Profile "false-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949157"

----------------------- debugLogs end: false-949157 [took: 4.848393502s] --------------------------------
helpers_test.go:175: Cleaning up "false-949157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-949157
--- PASS: TestNetworkPlugins/group/false (5.34s)

TestStartStop/group/old-k8s-version/serial/FirstStart (128.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-437020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0108 20:50:49.441805  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:52:26.515693  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:52:46.390581  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-437020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m8.707192649s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.71s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-437020 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c049e37f-9a85-4c41-bfdc-01d105610ef7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c049e37f-9a85-4c41-bfdc-01d105610ef7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003397316s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-437020 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)
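
The DeployApp step above is a create-then-wait pattern: apply a manifest, then block until pods matching the label selector report ready. A minimal Go sketch of that pattern driven through kubectl, with a placeholder kubeconfig context; it is illustrative, not the test's own helper code:

// deploy_wait.go: apply a manifest and wait for the labelled pod to become Ready.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	ctx := "example-context" // placeholder kubeconfig context

	// Create the workload from a manifest, as the test does with testdata/busybox.yaml.
	if err := run("--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}
	// Block until the labelled pod reports Ready, mirroring the test's long wait.
	if err := run("--context", ctx, "wait", "--for=condition=ready",
		"pod", "--selector=integration-test=busybox", "--timeout=8m0s"); err != nil {
		panic(err)
	}
}
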

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-437020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-437020 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-437020 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-437020 --alsologtostderr -v=3: (12.155772201s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-437020 -n old-k8s-version-437020
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-437020 -n old-k8s-version-437020: exit status 7 (97.314729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-437020 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (665.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-437020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-437020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m4.593896704s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-437020 -n old-k8s-version-437020
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (665.04s)

TestStartStop/group/no-preload/serial/FirstStart (73.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-550984 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 20:53:43.201439  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-550984 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m13.715166719s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.72s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-550984 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5cef4fd0-3880-467d-85b6-2ad67770bdfb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5cef4fd0-3880-467d-85b6-2ad67770bdfb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004349967s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-550984 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-550984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-550984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041413632s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-550984 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)
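Note that the addon is enabled with its image and registry deliberately overridden (echoserver:1.4 from fake.domain), so the describe step above is evidently checking the Deployment spec rather than a working metrics pipeline. A hedged sketch of inspecting that by hand; the grep is illustrative and not part of the test:

    out/minikube-linux-arm64 addons enable metrics-server -p no-preload-550984 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # illustrative only: confirm the overridden image and registry appear in the pod template
    kubectl --context no-preload-550984 describe deploy/metrics-server -n kube-system | grep -i image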

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-550984 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-550984 --alsologtostderr -v=3: (12.142904007s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-550984 -n no-preload-550984
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-550984 -n no-preload-550984: exit status 7 (99.21921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-550984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
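The "status error: exit status 7 (may be ok)" line above reflects how minikube status encodes host state in its exit code; in this run the stopped profile printed Stopped and exited 7, which the test accepts before enabling the dashboard addon. A small sketch of tolerating that exit code in a wrapper script (treating 7 as "stopped" is an assumption drawn from this run's output rather than a documented contract):

    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-550984 -n no-preload-550984
    rc=$?
    # in this run an exit code of 7 accompanied a "Stopped" host, so treat it as non-fatal
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
        echo "unexpected status exit code: $rc" >&2
        exit 1
    fi
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-550984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4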

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (344.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-550984 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 20:56:46.244744  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 20:57:26.515767  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 20:57:46.391270  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 20:58:43.201939  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
E0108 21:00:29.561529  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-550984 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m43.682926142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-550984 -n no-preload-550984
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (344.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2v9kp" [88764fde-c5c4-412c-96e2-48efae891a4d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2v9kp" [88764fde-c5c4-412c-96e2-48efae891a4d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.071030602s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2v9kp" [88764fde-c5c4-412c-96e2-48efae891a4d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003805663s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-550984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-550984 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-550984 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-550984 -n no-preload-550984
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-550984 -n no-preload-550984: exit status 2 (368.642362ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-550984 -n no-preload-550984
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-550984 -n no-preload-550984: exit status 2 (375.465232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-550984 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-550984 -n no-preload-550984
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-550984 -n no-preload-550984
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.63s)
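The Pause subtest above follows a fixed cycle: pause the profile, confirm the API server reports Paused and the kubelet reports Stopped (both status calls exit 2, which the test tolerates), then unpause and re-run both status checks. Condensed into the underlying commands, all taken from the log:

    out/minikube-linux-arm64 pause -p no-preload-550984 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-550984 -n no-preload-550984    # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-550984 -n no-preload-550984      # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-550984 --alsologtostderr -v=1
    # the final two status calls exited 0 in this run (their output is not captured in the log)
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-550984 -n no-preload-550984
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-550984 -n no-preload-550984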

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (61.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-576020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-576020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m1.176952484s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-576020 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e3604550-e989-4b5c-bdf8-168dded0677f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e3604550-e989-4b5c-bdf8-168dded0677f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004227013s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-576020 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-576020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-576020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.14165568s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-576020 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-576020 --alsologtostderr -v=3
E0108 21:02:26.515543  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-576020 --alsologtostderr -v=3: (12.230555246s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-576020 -n embed-certs-576020
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-576020 -n embed-certs-576020: exit status 7 (89.929613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-576020 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (342.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-576020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0108 21:02:46.390780  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 21:03:43.202193  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-576020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m41.794009852s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-576020 -n embed-certs-576020
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (342.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rxnrc" [9b60f905-854b-48fc-a1d1-d71274e37720] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003788394s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rxnrc" [9b60f905-854b-48fc-a1d1-d71274e37720] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00370632s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-437020 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-437020 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-437020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-437020 -n old-k8s-version-437020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-437020 -n old-k8s-version-437020: exit status 2 (412.16265ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-437020 -n old-k8s-version-437020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-437020 -n old-k8s-version-437020: exit status 2 (367.651045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-437020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-437020 -n old-k8s-version-437020
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-437020 -n old-k8s-version-437020
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-888465 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0108 21:04:41.295907  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.301231  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.311572  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.331844  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.372203  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.452558  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.612973  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:41.933634  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:42.574616  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:43.855141  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:46.415561  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:04:51.536459  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:05:01.776957  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:05:22.257867  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-888465 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m27.257809503s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-888465 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3c45674b-a126-4fb6-b75b-064d74fedddb] Pending
helpers_test.go:344: "busybox" [3c45674b-a126-4fb6-b75b-064d74fedddb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 21:06:03.218912  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3c45674b-a126-4fb6-b75b-064d74fedddb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004044864s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-888465 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-888465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-888465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.166339812s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-888465 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-888465 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-888465 --alsologtostderr -v=3: (12.22134393s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465: exit status 7 (90.641884ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-888465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-888465 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0108 21:07:25.140091  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
E0108 21:07:26.516396  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
E0108 21:07:29.441998  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 21:07:46.390812  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 21:07:47.712414  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:47.717618  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:47.727978  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:47.748178  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:47.788411  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:47.868652  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:48.029376  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:48.350085  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:48.990960  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:50.271164  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:52.831363  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:07:57.951985  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:08:08.192756  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-888465 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m45.782228614s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mbmm2" [b8e910c3-4ceb-4d8b-a306-41b2123ac655] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mbmm2" [b8e910c3-4ceb-4d8b-a306-41b2123ac655] Running
E0108 21:08:28.673175  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004784183s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mbmm2" [b8e910c3-4ceb-4d8b-a306-41b2123ac655] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004400183s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-576020 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-576020 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-576020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-576020 -n embed-certs-576020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-576020 -n embed-certs-576020: exit status 2 (382.363378ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-576020 -n embed-certs-576020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-576020 -n embed-certs-576020: exit status 2 (376.412731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-576020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-576020 -n embed-certs-576020
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-576020 -n embed-certs-576020
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-118917 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 21:09:09.633400  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-118917 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (48.36246598s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-118917 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-118917 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.21799319s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)
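The "WARNING: cni mode requires additional setup before pods can schedule" lines are why the newest-cni group reports DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop at 0.00s: the profile is started in bare CNI mode with a custom pod CIDR and no network plugin installed, so the start only waits on the apiserver, system pods, and default service account rather than on user workloads. The relevant start invocation, repeated from FirstStart above for reference:

    # flags taken verbatim from the FirstStart run; no CNI is installed afterwards, hence the scheduling warning
    out/minikube-linux-arm64 start -p newest-cni-118917 --memory=2200 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
        --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2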

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-118917 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-118917 --alsologtostderr -v=3: (1.277989182s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-118917 -n newest-cni-118917
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-118917 -n newest-cni-118917: exit status 7 (113.150249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-118917 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-118917 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 21:09:41.295000  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-118917 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (31.241223693s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-118917 -n newest-cni-118917
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-118917 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-118917 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-118917 -n newest-cni-118917
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-118917 -n newest-cni-118917: exit status 2 (379.522378ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-118917 -n newest-cni-118917
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-118917 -n newest-cni-118917: exit status 2 (393.026073ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-118917 --alsologtostderr -v=1
E0108 21:10:08.981231  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/no-preload-550984/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-118917 -n newest-cni-118917
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-118917 -n newest-cni-118917
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.42s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0108 21:10:31.554151  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m0.762870171s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.76s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-949157 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g7fmc" [4f13c094-611e-4a83-9163-852d601f3a01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g7fmc" [4f13c094-611e-4a83-9163-852d601f3a01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003593072s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-949157 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
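The DNS, Localhost, and HairPin checks above all exec into the netcat deployment created in NetCatPod: DNS resolves the in-cluster name kubernetes.default, Localhost connects to the pod's own port over 127.0.0.1, and HairPin connects back through the service name netcat, which (assuming testdata/netcat-deployment.yaml fronts the deployment with a service of that name) exercises hairpin traffic. The three probes, verbatim from the log:

    kubectl --context auto-949157 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"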

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (88.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m28.538017262s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s952r" [0a9f2de8-1b5f-456a-86ac-f362fd18aeed] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s952r" [0a9f2de8-1b5f-456a-86ac-f362fd18aeed] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005236727s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s952r" [0a9f2de8-1b5f-456a-86ac-f362fd18aeed] Running
E0108 21:12:26.515626  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/ingress-addon-legacy-918006/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004836464s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-888465 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-888465 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-888465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-888465 --alsologtostderr -v=1: (1.128472209s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465: exit status 2 (427.383084ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465: exit status 2 (438.114885ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-888465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-888465 -n default-k8s-diff-port-888465
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)
E0108 21:17:36.462039  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:17:46.390938  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 21:17:47.712655  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
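
The Pause check above relies on minikube's status exit codes: while the profile is paused, "status" prints Paused/Stopped and exits with code 2, which the test tolerates ("status error: exit status 2 (may be ok)"). A rough manual reproduction, assuming the binary path and profile name from the log and a cluster that still exists (a sketch, not part of the test suite):

    # profile and binary path copied from the log above; run from the workspace root
    out/minikube-linux-arm64 pause -p default-k8s-diff-port-888465
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-888465
    echo $?    # 2 while paused; re-running status after the unpause below exits 0
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-888465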

TestNetworkPlugins/group/calico/Start (78.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0108 21:12:46.391124  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/addons-241374/client.crt: no such file or directory
E0108 21:12:47.712067  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
E0108 21:13:15.395151  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/old-k8s-version-437020/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m18.843838021s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.84s)
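
Each TestNetworkPlugins group boots its own cluster with the CNI under test selected via --cni; the flag takes either a built-in plugin name or, as the custom-flannel group below shows, a path to a local manifest. The general shape of the start command, with the per-group profile name left as a placeholder and the remaining flags exactly as logged:

    out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=<calico|kindnet|flannel|bridge|path/to/manifest.yaml> \
      --driver=docker --container-runtime=containerd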

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-slp67" [7e85d100-5cbb-40ca-9e0e-cf1620b3f057] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005319305s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
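
The ControllerPod checks wait for the CNI's own daemon pod to become healthy, selected by label in its namespace (app=kindnet in kube-system here; the calico and flannel groups below wait on k8s-app=calico-node and app=flannel). An equivalent stand-alone wait, shown only as an illustration of what the helper polls for, not as the test's actual code:

    kubectl --context kindnet-949157 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s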

TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)
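
The KubeletFlags step SSHes into the node and dumps the running kubelet command line with pgrep, presumably so the suite can check the expected flags; by hand that is the same command as logged, with this group's profile:

    out/minikube-linux-arm64 ssh -p kindnet-949157 "pgrep -a kubelet"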

TestNetworkPlugins/group/kindnet/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-949157 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m8rq9" [6e18cd92-e16b-47a7-a9d4-4cc002555013] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:13:26.244947  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/functional-819954/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-m8rq9" [6e18cd92-e16b-47a7-a9d4-4cc002555013] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003622699s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.40s)

TestNetworkPlugins/group/kindnet/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-949157 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

TestNetworkPlugins/group/kindnet/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.29s)

TestNetworkPlugins/group/kindnet/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)
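
The DNS, Localhost and HairPin probes in this group all execute inside the netcat deployment created from testdata/netcat-deployment.yaml: DNS resolves kubernetes.default in-cluster, Localhost checks the pod can reach its own port over 127.0.0.1, and HairPin checks the pod can reach itself back through its own service name ("netcat"), which exercises hairpin NAT in the CNI. The commands are the same ones logged above and can be re-run by hand while the kindnet-949157 cluster is up (a sketch, not the test's own helper):

    kubectl --context kindnet-949157 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"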

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ccchl" [4acb7958-e83b-45da-a0bc-895e8b9e1059] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.023170951s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/custom-flannel/Start (67.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.648140282s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.65s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-949157 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pm567" [72629e52-2b55-4628-8c78-aada84bf3a62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pm567" [72629e52-2b55-4628-8c78-aada84bf3a62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004968231s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.43s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-949157 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (87.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m27.856039274s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-949157 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q94qj" [384edd30-52ce-474a-83a1-b54fab21d6e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q94qj" [384edd30-52ce-474a-83a1-b54fab21d6e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004583402s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-949157 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (57.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0108 21:16:00.737469  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:00.742839  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:00.753742  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:00.774011  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:00.814271  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:00.894550  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:01.054897  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:01.375417  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:02.016097  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:03.296327  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:05.857407  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
E0108 21:16:10.978334  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (57.961583988s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.96s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-949157 replace --force -f testdata/netcat-deployment.yaml
E0108 21:16:14.536298  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:14.541768  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:14.552083  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:14.573042  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:14.616004  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:14.696290  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jsg4g" [48145d77-fe75-432f-9755-429404de195c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:16:14.856645  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:15.177409  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:15.818252  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:17.098934  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
E0108 21:16:19.659582  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-jsg4g" [48145d77-fe75-432f-9755-429404de195c] Running
E0108 21:16:21.218537  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/default-k8s-diff-port-888465/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00433742s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-949157 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:16:24.780276  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pqmld" [71294951-cb2b-46dc-b151-f4e19fc68d7a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004248183s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

TestNetworkPlugins/group/bridge/Start (87.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-949157 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m27.617379829s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.62s)

TestNetworkPlugins/group/flannel/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-949157 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fbx9n" [f3cd896a-cc4c-4b6d-a4d4-471803dbe0c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fbx9n" [f3cd896a-cc4c-4b6d-a4d4-471803dbe0c7] Running
E0108 21:16:55.501104  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/auto-949157/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00469693s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.40s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-949157 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-949157 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-949157 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cg7j5" [1d0e6687-ccac-4dc6-83d4-81875f693d42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:18:18.591366  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:18.596795  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:18.607058  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:18.627315  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:18.667747  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:18.748033  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:18.908493  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:19.228695  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:19.871763  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
E0108 21:18:21.152661  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cg7j5" [1d0e6687-ccac-4dc6-83d4-81875f693d42] Running
E0108 21:18:23.713534  654805 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-649468/.minikube/profiles/kindnet-949157/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004862418s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-949157 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-949157 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (31/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-787864 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-787864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-787864
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-576973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-576973
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.35s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-949157 [pass: true] --------------------------------
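
Every probe below fails the same way because the kubenet group is skipped before a cluster is ever started, so neither the kubenet-949157 kubectl context nor the minikube profile exists; the debugLogs collector still runs and simply records the misses. Two quick checks that confirm this (commands assumed for illustration, not taken from the report):

    out/minikube-linux-arm64 profile list
    kubectl config get-contexts kubenet-949157
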
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-949157

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-949157

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

>>> host: /etc/hosts:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

>>> host: /etc/resolv.conf:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-949157

>>> host: crictl pods:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

>>> host: crictl containers:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

>>> k8s: describe netcat deployment:
error: context "kubenet-949157" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-949157" does not exist

>>> k8s: netcat logs:
error: context "kubenet-949157" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-949157" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-949157" does not exist

>>> k8s: coredns logs:
error: context "kubenet-949157" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-949157" does not exist

>>> k8s: api server logs:
error: context "kubenet-949157" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-949157" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-949157

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949157"

                                                
                                                
----------------------- debugLogs end: kubenet-949157 [took: 5.067438042s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-949157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-949157
--- SKIP: TestNetworkPlugins/group/kubenet (5.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-949157 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-949157" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-949157

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-949157" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949157"

                                                
                                                
----------------------- debugLogs end: cilium-949157 [took: 5.803137575s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-949157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-949157
--- SKIP: TestNetworkPlugins/group/cilium (6.13s)

                                                
                                    