Test Report: Docker_Linux_containerd_arm64 17764

47aff3550d8f737faf92680522e584556adb8789:2023-12-12:32246
Test fail (18/315)

TestAddons/parallel/Ingress (38.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-004867 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-004867 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-004867 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e2926d54-d485-41a1-be2a-bf3ed5c3b232] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e2926d54-d485-41a1-be2a-bf3ed5c3b232] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010361084s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-004867 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.050283093s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-004867 addons disable ingress-dns --alsologtostderr -v=1: (1.176772722s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-004867 addons disable ingress --alsologtostderr -v=1: (7.756727653s)
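Note on reproducing: the failing step can be re-run by hand against the same profile. A minimal sketch, assuming the addons-004867 profile is still up and the ingress-dns addon is re-enabled (the commands and the 192.168.49.2 node IP are taken from the log above):

	# confirm the node IP minikube reports for the profile
	out/minikube-linux-arm64 -p addons-004867 ip
	# query the ingress-dns server on that IP directly; a healthy addon resolves the test hostname
	nslookup hello-john.test 192.168.49.2
	# the 15s "connection timed out; no servers could be reached" above indicates UDP/53 on 192.168.49.2 did not answer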
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-004867
helpers_test.go:235: (dbg) docker inspect addons-004867:
-- stdout --
	[
	    {
	        "Id": "0b55a2fba7e473ef74406a64b6b8277fcbf0c39476d0ff9e16242956f1e9b81e",
	        "Created": "2023-12-12T00:12:13.27852874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1142331,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:12:13.59684776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/0b55a2fba7e473ef74406a64b6b8277fcbf0c39476d0ff9e16242956f1e9b81e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b55a2fba7e473ef74406a64b6b8277fcbf0c39476d0ff9e16242956f1e9b81e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b55a2fba7e473ef74406a64b6b8277fcbf0c39476d0ff9e16242956f1e9b81e/hosts",
	        "LogPath": "/var/lib/docker/containers/0b55a2fba7e473ef74406a64b6b8277fcbf0c39476d0ff9e16242956f1e9b81e/0b55a2fba7e473ef74406a64b6b8277fcbf0c39476d0ff9e16242956f1e9b81e-json.log",
	        "Name": "/addons-004867",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-004867:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-004867",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bacf648a6df1dd5a6240ddd42c8a9be1cb9662c05a58913fc8a3c784af8da70c-init/diff:/var/lib/docker/overlay2/83f94b9f515065f4cf4d4337d1fbe3fc13b585131a89a52ad8eb2b6bf7d119ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bacf648a6df1dd5a6240ddd42c8a9be1cb9662c05a58913fc8a3c784af8da70c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bacf648a6df1dd5a6240ddd42c8a9be1cb9662c05a58913fc8a3c784af8da70c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bacf648a6df1dd5a6240ddd42c8a9be1cb9662c05a58913fc8a3c784af8da70c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-004867",
	                "Source": "/var/lib/docker/volumes/addons-004867/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-004867",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-004867",
	                "name.minikube.sigs.k8s.io": "addons-004867",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "033cfe9c98c88fb2e55a0dea600fdbd8a1f5dec99a9b3cf0859ca931a0749a53",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34027"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34024"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34026"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34025"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/033cfe9c98c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-004867": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0b55a2fba7e4",
	                        "addons-004867"
	                    ],
	                    "NetworkID": "7acb10f46b86c6ff7fde48134807feb70c0b4828faea72ae15b8aac3e00d6b10",
	                    "EndpointID": "4f25099f82baa425968f5ab17636afc8342150fb550b59d0220c541bbab95c72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-004867 -n addons-004867
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-004867 logs -n 25: (1.647921133s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| delete  | -p download-only-570176                                                                     | download-only-570176   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| delete  | -p download-only-570176                                                                     | download-only-570176   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| start   | --download-only -p                                                                          | download-docker-412876 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | download-docker-412876                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-412876                                                                   | download-docker-412876 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-472349   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | binary-mirror-472349                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36139                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-472349                                                                     | binary-mirror-472349   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| addons  | disable dashboard -p                                                                        | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | addons-004867                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | addons-004867                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-004867 --wait=true                                                                | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-004867 ip                                                                            | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	| addons  | addons-004867 addons disable                                                                | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | -p addons-004867                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-004867 ssh cat                                                                       | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | /opt/local-path-provisioner/pvc-709bb9e2-1272-4f29-8b35-92ea026ee6d1_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-004867 addons disable                                                                | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-004867 addons                                                                        | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-004867 addons                                                                        | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | addons-004867                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | -p addons-004867                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-004867 addons                                                                        | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | addons-004867                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-004867 ssh curl -s                                                                   | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-004867 ip                                                                            | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:15 UTC | 12 Dec 23 00:15 UTC |
	| addons  | addons-004867 addons disable                                                                | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:16 UTC | 12 Dec 23 00:16 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-004867 addons disable                                                                | addons-004867          | jenkins | v1.32.0 | 12 Dec 23 00:16 UTC | 12 Dec 23 00:16 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:50.035623 1141875 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:50.035882 1141875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:50.035913 1141875 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:50.035935 1141875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:50.036287 1141875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:11:50.036861 1141875 out.go:303] Setting JSON to false
	I1212 00:11:50.037860 1141875 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24857,"bootTime":1702315053,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:11:50.037985 1141875 start.go:138] virtualization:  
	I1212 00:11:50.040812 1141875 out.go:177] * [addons-004867] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:50.043710 1141875 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:11:50.043908 1141875 notify.go:220] Checking for updates...
	I1212 00:11:50.045955 1141875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:50.048394 1141875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:11:50.050290 1141875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:11:50.052876 1141875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:11:50.055040 1141875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:11:50.057245 1141875 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:11:50.081516 1141875 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:50.081651 1141875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:50.173587 1141875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-12 00:11:50.16237817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:50.173791 1141875 docker.go:295] overlay module found
	I1212 00:11:50.176241 1141875 out.go:177] * Using the docker driver based on user configuration
	I1212 00:11:50.178978 1141875 start.go:298] selected driver: docker
	I1212 00:11:50.179005 1141875 start.go:902] validating driver "docker" against <nil>
	I1212 00:11:50.179022 1141875 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:11:50.179712 1141875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:50.247770 1141875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-12 00:11:50.238476784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:50.247932 1141875 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:11:50.248187 1141875 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:11:50.250157 1141875 out.go:177] * Using Docker driver with root privileges
	I1212 00:11:50.252659 1141875 cni.go:84] Creating CNI manager for ""
	I1212 00:11:50.252682 1141875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:11:50.252693 1141875 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:11:50.252707 1141875 start_flags.go:323] config:
	{Name:addons-004867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-004867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:50.255239 1141875 out.go:177] * Starting control plane node addons-004867 in cluster addons-004867
	I1212 00:11:50.258052 1141875 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:11:50.261035 1141875 out.go:177] * Pulling base image ...
	I1212 00:11:50.263121 1141875 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:11:50.263174 1141875 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1212 00:11:50.263186 1141875 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:50.263199 1141875 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:50.263279 1141875 preload.go:174] Found /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:11:50.263290 1141875 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1212 00:11:50.263680 1141875 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/config.json ...
	I1212 00:11:50.263712 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/config.json: {Name:mkf35d26e2e92df82aaf8c169346847ac5fe303a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:50.280397 1141875 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:50.280556 1141875 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:50.280582 1141875 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:50.280590 1141875 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:50.280599 1141875 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:50.280605 1141875 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 from local cache
	I1212 00:12:06.239713 1141875 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 from cached tarball
	I1212 00:12:06.239746 1141875 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:12:06.239800 1141875 start.go:365] acquiring machines lock for addons-004867: {Name:mk30adeca9e69543d9f92590732edf33382f91ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:12:06.240387 1141875 start.go:369] acquired machines lock for "addons-004867" in 561.777µs
	I1212 00:12:06.240426 1141875 start.go:93] Provisioning new machine with config: &{Name:addons-004867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-004867 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:12:06.240517 1141875 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:12:06.242760 1141875 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1212 00:12:06.243009 1141875 start.go:159] libmachine.API.Create for "addons-004867" (driver="docker")
	I1212 00:12:06.243042 1141875 client.go:168] LocalClient.Create starting
	I1212 00:12:06.243165 1141875 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem
	I1212 00:12:06.378165 1141875 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem
	I1212 00:12:06.604411 1141875 cli_runner.go:164] Run: docker network inspect addons-004867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:12:06.624675 1141875 cli_runner.go:211] docker network inspect addons-004867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:12:06.624763 1141875 network_create.go:281] running [docker network inspect addons-004867] to gather additional debugging logs...
	I1212 00:12:06.624786 1141875 cli_runner.go:164] Run: docker network inspect addons-004867
	W1212 00:12:06.643341 1141875 cli_runner.go:211] docker network inspect addons-004867 returned with exit code 1
	I1212 00:12:06.643371 1141875 network_create.go:284] error running [docker network inspect addons-004867]: docker network inspect addons-004867: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-004867 not found
	I1212 00:12:06.643383 1141875 network_create.go:286] output of [docker network inspect addons-004867]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-004867 not found
	
	** /stderr **
	I1212 00:12:06.643488 1141875 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:12:06.661166 1141875 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024f3100}
	I1212 00:12:06.661205 1141875 network_create.go:124] attempt to create docker network addons-004867 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:12:06.661263 1141875 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-004867 addons-004867
	I1212 00:12:06.749684 1141875 network_create.go:108] docker network addons-004867 192.168.49.0/24 created
	I1212 00:12:06.749718 1141875 kic.go:121] calculated static IP "192.168.49.2" for the "addons-004867" container
	I1212 00:12:06.749801 1141875 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:12:06.768235 1141875 cli_runner.go:164] Run: docker volume create addons-004867 --label name.minikube.sigs.k8s.io=addons-004867 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:12:06.790073 1141875 oci.go:103] Successfully created a docker volume addons-004867
	I1212 00:12:06.790182 1141875 cli_runner.go:164] Run: docker run --rm --name addons-004867-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-004867 --entrypoint /usr/bin/test -v addons-004867:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:12:08.939467 1141875 cli_runner.go:217] Completed: docker run --rm --name addons-004867-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-004867 --entrypoint /usr/bin/test -v addons-004867:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib: (2.149241698s)
	I1212 00:12:08.939512 1141875 oci.go:107] Successfully prepared a docker volume addons-004867
	I1212 00:12:08.939542 1141875 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:12:08.939560 1141875 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:12:08.939645 1141875 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-004867:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:12:13.197539 1141875 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-004867:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.257836653s)
	I1212 00:12:13.197572 1141875 kic.go:203] duration metric: took 4.258009 seconds to extract preloaded images to volume
	W1212 00:12:13.197766 1141875 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:12:13.197885 1141875 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:12:13.262215 1141875 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-004867 --name addons-004867 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-004867 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-004867 --network addons-004867 --ip 192.168.49.2 --volume addons-004867:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:12:13.607763 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Running}}
	I1212 00:12:13.638297 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:13.671692 1141875 cli_runner.go:164] Run: docker exec addons-004867 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:12:13.764651 1141875 oci.go:144] the created container "addons-004867" has a running status.
	I1212 00:12:13.764678 1141875 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa...
	I1212 00:12:13.990718 1141875 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:12:14.019780 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:14.049194 1141875 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:12:14.049219 1141875 kic_runner.go:114] Args: [docker exec --privileged addons-004867 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:12:14.145769 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:14.170769 1141875 machine.go:88] provisioning docker machine ...
	I1212 00:12:14.170798 1141875 ubuntu.go:169] provisioning hostname "addons-004867"
	I1212 00:12:14.170863 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:14.213021 1141875 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.213433 1141875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I1212 00:12:14.213447 1141875 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-004867 && echo "addons-004867" | sudo tee /etc/hostname
	I1212 00:12:14.214005 1141875 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39576->127.0.0.1:34028: read: connection reset by peer
	I1212 00:12:17.371645 1141875 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-004867
	
	I1212 00:12:17.371842 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:17.391338 1141875 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:17.391765 1141875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I1212 00:12:17.391791 1141875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-004867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-004867/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-004867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:12:17.532638 1141875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:12:17.532668 1141875 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1135857/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1135857/.minikube}
	I1212 00:12:17.532705 1141875 ubuntu.go:177] setting up certificates
	I1212 00:12:17.532714 1141875 provision.go:83] configureAuth start
	I1212 00:12:17.532780 1141875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-004867
	I1212 00:12:17.551901 1141875 provision.go:138] copyHostCerts
	I1212 00:12:17.551993 1141875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem (1078 bytes)
	I1212 00:12:17.552132 1141875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem (1123 bytes)
	I1212 00:12:17.552211 1141875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem (1675 bytes)
	I1212 00:12:17.552272 1141875 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem org=jenkins.addons-004867 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-004867]
	I1212 00:12:17.747152 1141875 provision.go:172] copyRemoteCerts
	I1212 00:12:17.747230 1141875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:12:17.747278 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:17.765219 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:17.866429 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:12:17.896166 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 00:12:17.926760 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
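
For reference, the server certificate here is generated on the host with the SANs listed in the log (192.168.49.2, 127.0.0.1, localhost, minikube, addons-004867) and then copied to /etc/docker on the node. A minimal Go sketch of producing a SAN-bearing server certificate with the standard library; it is self-signed for brevity, whereas minikube signs with its CA key, and all names are illustrative:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Private key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Template carrying the same kinds of SANs seen in the log:
	// node IP, loopback, and the node hostname.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-004867"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "addons-004867"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}

	// Self-signed here; a CA-signed cert would pass the CA cert and key as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
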
	I1212 00:12:17.955204 1141875 provision.go:86] duration metric: configureAuth took 422.471388ms
	I1212 00:12:17.955231 1141875 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:12:17.955455 1141875 config.go:182] Loaded profile config "addons-004867": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:12:17.955466 1141875 machine.go:91] provisioned docker machine in 3.784680275s
	I1212 00:12:17.955473 1141875 client.go:171] LocalClient.Create took 11.712422377s
	I1212 00:12:17.955485 1141875 start.go:167] duration metric: libmachine.API.Create for "addons-004867" took 11.712477383s
	I1212 00:12:17.955500 1141875 start.go:300] post-start starting for "addons-004867" (driver="docker")
	I1212 00:12:17.955511 1141875 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:12:17.955567 1141875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:12:17.955616 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:17.974092 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:18.079558 1141875 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:12:18.084196 1141875 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:12:18.084237 1141875 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:12:18.084274 1141875 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:12:18.084289 1141875 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:12:18.084301 1141875 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/addons for local assets ...
	I1212 00:12:18.084396 1141875 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/files for local assets ...
	I1212 00:12:18.084447 1141875 start.go:303] post-start completed in 128.939068ms
	I1212 00:12:18.084807 1141875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-004867
	I1212 00:12:18.104577 1141875 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/config.json ...
	I1212 00:12:18.104879 1141875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:12:18.104947 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:18.127053 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:18.229998 1141875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:12:18.236477 1141875 start.go:128] duration metric: createHost completed in 11.995943419s
	I1212 00:12:18.236503 1141875 start.go:83] releasing machines lock for "addons-004867", held for 11.996096903s
	I1212 00:12:18.236576 1141875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-004867
	I1212 00:12:18.256117 1141875 ssh_runner.go:195] Run: cat /version.json
	I1212 00:12:18.256168 1141875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:12:18.256176 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:18.256238 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:18.288807 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:18.292572 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:18.523616 1141875 ssh_runner.go:195] Run: systemctl --version
	I1212 00:12:18.529473 1141875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:12:18.535214 1141875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1212 00:12:18.566918 1141875 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:12:18.567016 1141875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:12:18.603594 1141875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
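
The two find commands above first patch the loopback CNI config in place (adding a "name" field if missing and pinning cniVersion to 1.0.0), then move any bridge/podman configs aside so the recommended kindnet CNI can own pod networking. A hedged Go sketch of the same JSON edit using only the standard library; the file path is an assumption for illustration, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	path := "/etc/cni/net.d/200-loopback.conf" // illustrative path

	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}

	// Mirror the sed edits from the log: ensure a "name" field exists
	// and pin the CNI version expected by the runtime.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"

	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("patched", path)
}
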
	I1212 00:12:18.603622 1141875 start.go:475] detecting cgroup driver to use...
	I1212 00:12:18.603654 1141875 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:12:18.603710 1141875 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:12:18.618537 1141875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:12:18.632684 1141875 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:12:18.632753 1141875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:12:18.648791 1141875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:12:18.666015 1141875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:12:18.768326 1141875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:12:18.875245 1141875 docker.go:219] disabling docker service ...
	I1212 00:12:18.875351 1141875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:12:18.898012 1141875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:12:18.912559 1141875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:12:19.010553 1141875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:12:19.109159 1141875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:12:19.122610 1141875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:12:19.143725 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:12:19.156005 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:12:19.168112 1141875 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:12:19.168219 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:12:19.180803 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:12:19.193451 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:12:19.205645 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:12:19.217429 1141875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:12:19.228948 1141875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
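
Taken together, the sed edits above point containerd at the pause:3.9 sandbox image, drop the legacy runc v1/linux shims in favor of io.containerd.runc.v2, set the CNI conf_dir, and select cgroupfs by forcing SystemdCgroup = false. A minimal Go sketch of the same line-oriented substitution style applied to a config.toml string; the sample input is abbreviated and the patterns are approximated from the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// Same spirit as the sed -r calls in the log: rewrite whole assignment
	// lines while preserving their indentation via the captured group.
	subs := []struct{ pattern, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	}
	for _, s := range subs {
		config = regexp.MustCompile(s.pattern).ReplaceAllString(config, s.repl)
	}
	fmt.Print(config)
}
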
	I1212 00:12:19.241297 1141875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:12:19.251496 1141875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:12:19.261908 1141875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:12:19.364235 1141875 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:12:19.505488 1141875 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:12:19.505605 1141875 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:12:19.510948 1141875 start.go:543] Will wait 60s for crictl version
	I1212 00:12:19.511037 1141875 ssh_runner.go:195] Run: which crictl
	I1212 00:12:19.515587 1141875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:12:19.564454 1141875 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1212 00:12:19.564551 1141875 ssh_runner.go:195] Run: containerd --version
	I1212 00:12:19.594623 1141875 ssh_runner.go:195] Run: containerd --version
	I1212 00:12:19.629857 1141875 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I1212 00:12:19.632028 1141875 cli_runner.go:164] Run: docker network inspect addons-004867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:12:19.650040 1141875 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:12:19.654862 1141875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:12:19.668325 1141875 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:12:19.668398 1141875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:12:19.709471 1141875 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:12:19.709497 1141875 containerd.go:518] Images already preloaded, skipping extraction
	I1212 00:12:19.709557 1141875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:12:19.750063 1141875 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:12:19.750087 1141875 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:12:19.750148 1141875 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:12:19.793839 1141875 cni.go:84] Creating CNI manager for ""
	I1212 00:12:19.793865 1141875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:12:19.793895 1141875 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:12:19.793917 1141875 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-004867 NodeName:addons-004867 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:12:19.794056 1141875 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-004867"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
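
The generated kubeadm config is a single four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that splits such a stream and reports each document's kind using only the standard library; the sample input is abbreviated from the config above:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(stream, "\n---\n") {
		kind := "unknown"
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			kind = m[1]
		}
		fmt.Printf("document %d: %s\n", i, kind)
	}
}
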
	
	I1212 00:12:19.794129 1141875 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-004867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-004867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
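
In the kubelet drop-in above, the empty ExecStart= line is intentional: a systemd drop-in must clear the base unit's ExecStart before it can assign a new command line. A hedged Go sketch of rendering such a drop-in with text/template; the struct and template are illustrative stand-ins, not minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the shape of the unit shown in the log.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		Runtime, KubeletPath, NodeName, NodeIP string
	}{
		Runtime:     "containerd",
		KubeletPath: "/var/lib/minikube/binaries/v1.28.4/kubelet",
		NodeName:    "addons-004867",
		NodeIP:      "192.168.49.2",
	}
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
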
	I1212 00:12:19.794193 1141875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:12:19.804914 1141875 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:12:19.804996 1141875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:12:19.815762 1141875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1212 00:12:19.837535 1141875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:12:19.859417 1141875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 00:12:19.881294 1141875 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:12:19.885911 1141875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:12:19.899429 1141875 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867 for IP: 192.168.49.2
	I1212 00:12:19.899463 1141875 certs.go:190] acquiring lock for shared ca certs: {Name:mk518d45f153d561b6d30fa5c8435abd4f573517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.900149 1141875 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key
	I1212 00:12:20.214763 1141875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt ...
	I1212 00:12:20.214798 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt: {Name:mkb0072f46f8060632851d10560eb3a6011cb05c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:20.215002 1141875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key ...
	I1212 00:12:20.215028 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key: {Name:mk1b2dc1a3421ea5d461acc05d371ba6a2844337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:20.215870 1141875 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key
	I1212 00:12:20.873319 1141875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt ...
	I1212 00:12:20.873352 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt: {Name:mk16d4e3921c2cc1d9de6bff8bd5e7727543c1aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:20.873534 1141875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key ...
	I1212 00:12:20.873546 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key: {Name:mke5f431247e4def51b27321ad863c44d4887b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:20.873666 1141875 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.key
	I1212 00:12:20.873683 1141875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt with IP's: []
	I1212 00:12:21.938563 1141875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt ...
	I1212 00:12:21.938602 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: {Name:mkebd53a3af2ad2b738da23ff691f3f2ae3936f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:21.939508 1141875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.key ...
	I1212 00:12:21.939528 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.key: {Name:mkf2f0da479559280a4bd0a723ef3da367dbf6e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:21.940081 1141875 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.key.dd3b5fb2
	I1212 00:12:21.940106 1141875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:12:22.262703 1141875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.crt.dd3b5fb2 ...
	I1212 00:12:22.262735 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.crt.dd3b5fb2: {Name:mk60f73cdf430ba8da8a2710172e30c32bf509a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:22.263534 1141875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.key.dd3b5fb2 ...
	I1212 00:12:22.263552 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.key.dd3b5fb2: {Name:mk575550335d7e36ea2c601c09748e3a1bf5ef00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:22.264040 1141875 certs.go:337] copying /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.crt
	I1212 00:12:22.264124 1141875 certs.go:341] copying /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.key
	I1212 00:12:22.264177 1141875 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.key
	I1212 00:12:22.264198 1141875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.crt with IP's: []
	I1212 00:12:22.516780 1141875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.crt ...
	I1212 00:12:22.516820 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.crt: {Name:mk75a52d73b4b0bbfb863c199ddc8087036ad491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:22.517023 1141875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.key ...
	I1212 00:12:22.517040 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.key: {Name:mkd8b3b57ecb527013d14ad7963c1c93233689d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:22.517231 1141875 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:12:22.517279 1141875 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:12:22.517304 1141875 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:12:22.517363 1141875 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem (1675 bytes)
	I1212 00:12:22.518028 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:12:22.547642 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:12:22.577108 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:12:22.606311 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:12:22.634934 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:12:22.663121 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:12:22.692539 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:12:22.722336 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:12:22.751890 1141875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:12:22.780264 1141875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:12:22.801542 1141875 ssh_runner.go:195] Run: openssl version
	I1212 00:12:22.808598 1141875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:12:22.819963 1141875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:22.824540 1141875 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:22.824652 1141875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:22.833337 1141875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
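
The test/ln step above installs the minikube CA under its OpenSSL subject-hash name (b5213941.0) so that TLS clients scanning /etc/ssl/certs can locate it. A minimal Go sketch of the same idea, shelling out to openssl for the hash; the paths are taken from the log and the rest is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl prints the subject hash that OpenSSL-style cert directories key on.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Same effect as the ln -fs in the log: <hash>.0 -> the CA file.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; the link may not exist yet
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}
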
	I1212 00:12:22.844993 1141875 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:12:22.849807 1141875 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:12:22.849855 1141875 kubeadm.go:404] StartCluster: {Name:addons-004867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-004867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:12:22.849977 1141875 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:12:22.850054 1141875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:12:22.892734 1141875 cri.go:89] found id: ""
	I1212 00:12:22.892855 1141875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:12:22.904098 1141875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:12:22.915143 1141875 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:12:22.915243 1141875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:12:22.926504 1141875 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:12:22.926549 1141875 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:12:22.981443 1141875 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 00:12:22.981706 1141875 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:12:23.028258 1141875 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:12:23.028329 1141875 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:12:23.028369 1141875 kubeadm.go:322] OS: Linux
	I1212 00:12:23.028417 1141875 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:12:23.028467 1141875 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:12:23.028516 1141875 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:12:23.028566 1141875 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:12:23.028615 1141875 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:12:23.028670 1141875 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:12:23.028717 1141875 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 00:12:23.028765 1141875 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 00:12:23.028816 1141875 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 00:12:23.115363 1141875 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:12:23.115505 1141875 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:12:23.115611 1141875 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:12:23.383669 1141875 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:12:23.387667 1141875 out.go:204]   - Generating certificates and keys ...
	I1212 00:12:23.387808 1141875 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:12:23.387900 1141875 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:12:24.061816 1141875 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:12:24.252283 1141875 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:12:24.603072 1141875 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:12:25.409473 1141875 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:12:25.518138 1141875 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:12:25.518305 1141875 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-004867 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:12:25.903331 1141875 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:12:25.903485 1141875 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-004867 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:12:26.191261 1141875 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:12:26.423665 1141875 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:12:27.153519 1141875 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:12:27.153801 1141875 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:12:27.969209 1141875 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:12:29.132793 1141875 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:12:29.558674 1141875 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:12:29.697307 1141875 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:12:29.698191 1141875 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:12:29.700873 1141875 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:12:29.705438 1141875 out.go:204]   - Booting up control plane ...
	I1212 00:12:29.705560 1141875 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:12:29.705643 1141875 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:12:29.705716 1141875 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:12:29.721638 1141875 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:12:29.722552 1141875 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:12:29.722840 1141875 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:12:29.830471 1141875 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:12:37.335675 1141875 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504594 seconds
	I1212 00:12:37.335819 1141875 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:12:37.351613 1141875 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:12:37.876694 1141875 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:12:37.876882 1141875 kubeadm.go:322] [mark-control-plane] Marking the node addons-004867 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:12:38.387994 1141875 kubeadm.go:322] [bootstrap-token] Using token: cstusx.5zjgkfxwq7ksjxyr
	I1212 00:12:38.389872 1141875 out.go:204]   - Configuring RBAC rules ...
	I1212 00:12:38.389997 1141875 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:12:38.396208 1141875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:12:38.406080 1141875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:12:38.409961 1141875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:12:38.415686 1141875 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:12:38.419758 1141875 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:12:38.433995 1141875 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:12:38.681557 1141875 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:12:38.803391 1141875 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:12:38.804775 1141875 kubeadm.go:322] 
	I1212 00:12:38.804852 1141875 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:12:38.804863 1141875 kubeadm.go:322] 
	I1212 00:12:38.804953 1141875 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:12:38.804963 1141875 kubeadm.go:322] 
	I1212 00:12:38.804988 1141875 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:12:38.820223 1141875 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:12:38.820288 1141875 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:12:38.820298 1141875 kubeadm.go:322] 
	I1212 00:12:38.820350 1141875 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 00:12:38.820360 1141875 kubeadm.go:322] 
	I1212 00:12:38.820405 1141875 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:12:38.820414 1141875 kubeadm.go:322] 
	I1212 00:12:38.820468 1141875 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:12:38.820544 1141875 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:12:38.820615 1141875 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:12:38.820624 1141875 kubeadm.go:322] 
	I1212 00:12:38.820707 1141875 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:12:38.820784 1141875 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:12:38.820793 1141875 kubeadm.go:322] 
	I1212 00:12:38.820872 1141875 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cstusx.5zjgkfxwq7ksjxyr \
	I1212 00:12:38.820982 1141875 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5475a393936b6bc511cacca1c76e18c5ea4ff503b753104aaff3ee2c1a2497ed \
	I1212 00:12:38.821011 1141875 kubeadm.go:322] 	--control-plane 
	I1212 00:12:38.821021 1141875 kubeadm.go:322] 
	I1212 00:12:38.821101 1141875 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:12:38.821110 1141875 kubeadm.go:322] 
	I1212 00:12:38.821187 1141875 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cstusx.5zjgkfxwq7ksjxyr \
	I1212 00:12:38.821286 1141875 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5475a393936b6bc511cacca1c76e18c5ea4ff503b753104aaff3ee2c1a2497ed 
	I1212 00:12:38.823872 1141875 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:12:38.824037 1141875 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:12:38.824074 1141875 cni.go:84] Creating CNI manager for ""
	I1212 00:12:38.824088 1141875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:12:38.826102 1141875 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:12:38.827887 1141875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:12:38.834658 1141875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:12:38.834681 1141875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:12:38.872979 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:12:39.865351 1141875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:12:39.865429 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:39.865483 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=addons-004867 minikube.k8s.io/updated_at=2023_12_12T00_12_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.080796 1141875 ops.go:34] apiserver oom_adj: -16
	I1212 00:12:40.080915 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.180952 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.788633 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:41.289008 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:41.788002 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:42.288713 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:42.789037 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:43.288735 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:43.788039 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:44.288780 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:44.788530 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:45.288721 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:45.788289 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:46.288974 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:46.788205 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:47.288090 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:47.788089 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:48.288316 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:48.788568 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:49.288535 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:49.788908 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:50.288798 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:50.788617 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:51.288523 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:51.788524 1141875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:51.941240 1141875 kubeadm.go:1088] duration metric: took 12.075875432s to wait for elevateKubeSystemPrivileges.
	I1212 00:12:51.941273 1141875 kubeadm.go:406] StartCluster complete in 29.091421769s
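
The long run of identical "kubectl get sa default" commands above is a plain poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which appears to be the condition elevateKubeSystemPrivileges waits on before StartCluster is reported complete. A hedged Go sketch of that polling pattern; the timeout is an assumption for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl" // path as shown in the log
	deadline := time.Now().Add(60 * time.Second)            // timeout is an assumption

	for time.Now().Before(deadline) {
		// Succeeds once the default ServiceAccount has been created in the cluster.
		cmd := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
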
	I1212 00:12:51.941291 1141875 settings.go:142] acquiring lock: {Name:mk888158b3cbabbb2583b6a6f74ff62a9621d5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:51.942169 1141875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:12:51.942594 1141875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/kubeconfig: {Name:mkea8ea25a391ae5db2568a02e638c76b0d6995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:51.945010 1141875 config.go:182] Loaded profile config "addons-004867": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:12:51.945055 1141875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:12:51.945297 1141875 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 00:12:51.945380 1141875 addons.go:69] Setting volumesnapshots=true in profile "addons-004867"
	I1212 00:12:51.945398 1141875 addons.go:231] Setting addon volumesnapshots=true in "addons-004867"
	I1212 00:12:51.945442 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:51.945926 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.946269 1141875 addons.go:69] Setting cloud-spanner=true in profile "addons-004867"
	I1212 00:12:51.946286 1141875 addons.go:231] Setting addon cloud-spanner=true in "addons-004867"
	I1212 00:12:51.946319 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:51.946695 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.947223 1141875 addons.go:69] Setting metrics-server=true in profile "addons-004867"
	I1212 00:12:51.947271 1141875 addons.go:231] Setting addon metrics-server=true in "addons-004867"
	I1212 00:12:51.947343 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:51.947869 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.948272 1141875 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-004867"
	I1212 00:12:51.948292 1141875 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-004867"
	I1212 00:12:51.948322 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:51.948769 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.956757 1141875 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-004867"
	I1212 00:12:51.957228 1141875 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-004867"
	I1212 00:12:51.957434 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:51.957029 1141875 addons.go:69] Setting default-storageclass=true in profile "addons-004867"
	I1212 00:12:51.957570 1141875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-004867"
	I1212 00:12:51.957891 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.957042 1141875 addons.go:69] Setting registry=true in profile "addons-004867"
	I1212 00:12:51.966012 1141875 addons.go:231] Setting addon registry=true in "addons-004867"
	I1212 00:12:51.974013 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.957045 1141875 addons.go:69] Setting gcp-auth=true in profile "addons-004867"
	I1212 00:12:51.974338 1141875 mustload.go:65] Loading cluster: addons-004867
	I1212 00:12:51.957051 1141875 addons.go:69] Setting ingress=true in profile "addons-004867"
	I1212 00:12:51.957049 1141875 addons.go:69] Setting storage-provisioner=true in profile "addons-004867"
	I1212 00:12:51.957055 1141875 addons.go:69] Setting ingress-dns=true in profile "addons-004867"
	I1212 00:12:51.957056 1141875 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-004867"
	I1212 00:12:51.957059 1141875 addons.go:69] Setting inspektor-gadget=true in profile "addons-004867"
	I1212 00:12:51.975009 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:51.989614 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:51.996808 1141875 config.go:182] Loaded profile config "addons-004867": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:12:51.997260 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.020941 1141875 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-004867"
	I1212 00:12:52.021322 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.003412 1141875 addons.go:231] Setting addon ingress=true in "addons-004867"
	I1212 00:12:52.034647 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.003444 1141875 addons.go:231] Setting addon storage-provisioner=true in "addons-004867"
	I1212 00:12:52.035077 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.035513 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.046042 1141875 addons.go:231] Setting addon inspektor-gadget=true in "addons-004867"
	I1212 00:12:52.046115 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.046558 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.003459 1141875 addons.go:231] Setting addon ingress-dns=true in "addons-004867"
	I1212 00:12:52.067201 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.071920 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.083360 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.103928 1141875 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 00:12:52.107174 1141875 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 00:12:52.107196 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 00:12:52.107265 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.100040 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.179038 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 00:12:52.189428 1141875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 00:12:52.189453 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 00:12:52.189543 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.185013 1141875 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 00:12:52.192374 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 00:12:52.194632 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 00:12:52.192622 1141875 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 00:12:52.192631 1141875 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 00:12:52.204849 1141875 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:12:52.204932 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 00:12:52.205026 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.225080 1141875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:12:52.227188 1141875 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:12:52.227211 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:12:52.227283 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.233169 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 00:12:52.198390 1141875 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 00:12:52.237258 1141875 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-004867" context rescaled to 1 replicas
	I1212 00:12:52.251406 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 00:12:52.263233 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 00:12:52.255918 1141875 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 00:12:52.246979 1141875 addons.go:231] Setting addon default-storageclass=true in "addons-004867"
	I1212 00:12:52.255945 1141875 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:12:52.265968 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 00:12:52.265997 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.276584 1141875 out.go:177] * Verifying Kubernetes components...
	I1212 00:12:52.278564 1141875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:52.274951 1141875 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 00:12:52.278898 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 00:12:52.278959 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.285038 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.274961 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 00:12:52.274966 1141875 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 00:12:52.275036 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.311785 1141875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 00:12:52.315412 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 00:12:52.315433 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 00:12:52.315503 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.348919 1141875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:52.351022 1141875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:52.353984 1141875 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:12:52.354012 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 00:12:52.354083 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.356086 1141875 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-004867"
	I1212 00:12:52.356150 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:52.356623 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:52.346636 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 00:12:52.397085 1141875 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 00:12:52.413711 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 00:12:52.413741 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 00:12:52.413826 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.400930 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.363841 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
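Each "new ssh client" entry above pairs with one of the docker container inspect -f calls: the Go template pulls out the host port Docker published for the node container's 22/tcp, and sshutil then dials 127.0.0.1 on that port (34028 in this run) using the profile's id_rsa key. Run by hand against the same container, the lookup is just the following (quoted for a shell; the port printed will differ per run):

	docker container inspect addons-004867 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints the published host port, e.g. 34028 in this run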
	I1212 00:12:52.442149 1141875 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 00:12:52.445032 1141875 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:12:52.445097 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 00:12:52.445194 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.450801 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.478795 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.496242 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.519071 1141875 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:12:52.519093 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:12:52.519155 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.535473 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.587577 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.598922 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.602596 1141875 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 00:12:52.606736 1141875 out.go:177]   - Using image docker.io/busybox:stable
	I1212 00:12:52.608663 1141875 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:12:52.608692 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 00:12:52.608762 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:52.628698 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.642349 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.647130 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	W1212 00:12:52.658234 1141875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 00:12:52.658269 1141875 retry.go:31] will retry after 307.462209ms: ssh: handshake failed: EOF
	I1212 00:12:52.676080 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:52.787091 1141875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
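The bash pipeline above is how minikube makes the host reachable by name from inside the cluster: it dumps the live coredns ConfigMap, uses sed to splice a hosts block (192.168.49.1 mapped to host.minikube.internal, with fallthrough) in front of the forward-to-/etc/resolv.conf plugin and a log directive in front of errors, then pushes the result back with kubectl replace. One way to confirm the injected record afterwards, assuming the same context and reusing the busybox image already referenced for this profile (the pod name dns-check is only an example), would be:

	kubectl --context addons-004867 -n kube-system get configmap coredns -o yaml
	kubectl --context addons-004867 run dns-check --rm -it --restart=Never \
	  --image=docker.io/busybox:stable -- nslookup host.minikube.internal

The start.go line at 00:12:54.974955 ("host record injected into CoreDNS's ConfigMap") confirms the replace completed.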
	I1212 00:12:52.787949 1141875 node_ready.go:35] waiting up to 6m0s for node "addons-004867" to be "Ready" ...
	I1212 00:12:52.793390 1141875 node_ready.go:49] node "addons-004867" has status "Ready":"True"
	I1212 00:12:52.793413 1141875 node_ready.go:38] duration metric: took 5.442079ms waiting for node "addons-004867" to be "Ready" ...
	I1212 00:12:52.793424 1141875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:12:52.803613 1141875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace to be "Ready" ...
	I1212 00:12:53.086515 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:12:53.141961 1141875 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 00:12:53.141986 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 00:12:53.212341 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:12:53.281982 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:12:53.341888 1141875 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 00:12:53.341916 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 00:12:53.350522 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 00:12:53.366477 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:12:53.370211 1141875 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 00:12:53.370278 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 00:12:53.471748 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:12:53.479261 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 00:12:53.479339 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 00:12:53.490346 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 00:12:53.490417 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 00:12:53.520727 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:12:53.549358 1141875 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 00:12:53.549423 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 00:12:53.598700 1141875 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 00:12:53.598776 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 00:12:53.656287 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 00:12:53.656353 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 00:12:53.670362 1141875 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:12:53.670438 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 00:12:53.724339 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 00:12:53.724412 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 00:12:53.880718 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 00:12:53.880784 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 00:12:53.915050 1141875 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 00:12:53.915118 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 00:12:53.985667 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 00:12:53.985695 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 00:12:54.018474 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 00:12:54.018504 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 00:12:54.023515 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:12:54.030241 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 00:12:54.030271 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 00:12:54.206164 1141875 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:12:54.206192 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 00:12:54.210913 1141875 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:54.210936 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 00:12:54.257545 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 00:12:54.257581 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 00:12:54.283651 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 00:12:54.283679 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 00:12:54.393006 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:12:54.467423 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:54.522768 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 00:12:54.522795 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 00:12:54.544821 1141875 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 00:12:54.544847 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 00:12:54.732611 1141875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 00:12:54.732637 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 00:12:54.737355 1141875 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 00:12:54.737379 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 00:12:54.831711 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:12:54.965857 1141875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 00:12:54.965891 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 00:12:54.969519 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 00:12:54.974925 1141875 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.187798316s)
	I1212 00:12:54.974955 1141875 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 00:12:55.135994 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.049436174s)
	I1212 00:12:55.136067 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923654998s)
	I1212 00:12:55.230995 1141875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 00:12:55.231021 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 00:12:55.519399 1141875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 00:12:55.519424 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 00:12:55.757707 1141875 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:12:55.757734 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 00:12:55.972156 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:12:56.835316 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:12:57.102489 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.820453413s)
	I1212 00:12:57.102716 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.736147158s)
	I1212 00:12:57.102649 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.752079257s)
	I1212 00:12:58.996250 1141875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 00:12:58.996357 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:59.031659 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:59.340262 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:12:59.374340 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.85352702s)
	I1212 00:12:59.374435 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.350887304s)
	I1212 00:12:59.374577 1141875 addons.go:467] Verifying addon registry=true in "addons-004867"
	I1212 00:12:59.374635 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.907173006s)
	W1212 00:12:59.374666 1141875 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 00:12:59.374691 1141875 retry.go:31] will retry after 247.102291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
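The failure above is an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is submitted in the same kubectl batch as the CRDs that introduce that kind, so API discovery has not yet registered snapshot.storage.k8s.io/v1 when the class is created. minikube copes by retrying and, a few lines below, re-applying with --force. Done by hand, the same idea would stage the CRDs first and wait for them to be established; a minimal sketch using the manifests named in the log, assuming kubectl is pointed at the addons-004867 cluster:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# wait until the new API kinds are served before creating objects that use them
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml

minikube's retry loop (retry.go above, followed by the apply --force at 00:12:59.622327) reaches the same end state without an explicit wait.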
	I1212 00:12:59.377157 1141875 out.go:177] * Verifying registry addon...
	I1212 00:12:59.374798 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.405248624s)
	I1212 00:12:59.374520 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.981472323s)
	I1212 00:12:59.377238 1141875 addons.go:467] Verifying addon metrics-server=true in "addons-004867"
	I1212 00:12:59.375969 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.904129599s)
	I1212 00:12:59.377251 1141875 addons.go:467] Verifying addon ingress=true in "addons-004867"
	I1212 00:12:59.379219 1141875 out.go:177] * Verifying ingress addon...
	I1212 00:12:59.381982 1141875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 00:12:59.382778 1141875 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 00:12:59.391758 1141875 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 00:12:59.391777 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.392416 1141875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:12:59.392425 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
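The kapi.go "waiting for pod ... current state: Pending" lines that repeat from here on are the harness polling those two label selectors until every matching pod reports Ready; the same pattern appears later for kubernetes.io/minikube-addons=csi-hostpath-driver and =gcp-auth. Expressed as plain kubectl against the same cluster, the equivalent checks would look roughly like this (a sketch, not what kapi.go literally runs; the timeout is illustrative):

	kubectl --context addons-004867 -n kube-system wait --timeout=6m \
	  --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-004867 -n ingress-nginx wait --timeout=6m \
	  --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx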
	I1212 00:12:59.413586 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.414231 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:59.466478 1141875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 00:12:59.534711 1141875 addons.go:231] Setting addon gcp-auth=true in "addons-004867"
	I1212 00:12:59.534801 1141875 host.go:66] Checking if "addons-004867" exists ...
	I1212 00:12:59.535286 1141875 cli_runner.go:164] Run: docker container inspect addons-004867 --format={{.State.Status}}
	I1212 00:12:59.568976 1141875 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 00:12:59.569047 1141875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-004867
	I1212 00:12:59.599155 1141875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/addons-004867/id_rsa Username:docker}
	I1212 00:12:59.622327 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:59.921258 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.922846 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:00.434608 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.455202 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:00.923000 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.928290 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:01.151723 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.179467786s)
	I1212 00:13:01.151798 1141875 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-004867"
	I1212 00:13:01.154519 1141875 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 00:13:01.158337 1141875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 00:13:01.167775 1141875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:13:01.167804 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.177099 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.422676 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:01.423046 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.526368 1141875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.903991557s)
	I1212 00:13:01.526494 1141875 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.957493217s)
	I1212 00:13:01.532255 1141875 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 00:13:01.534270 1141875 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:13:01.538375 1141875 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 00:13:01.538442 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 00:13:01.566626 1141875 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 00:13:01.566699 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 00:13:01.594369 1141875 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:13:01.594443 1141875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 00:13:01.618934 1141875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:13:01.684540 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.833590 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:01.920609 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.922546 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.183814 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.437056 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.445176 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.456825 1141875 addons.go:467] Verifying addon gcp-auth=true in "addons-004867"
	I1212 00:13:02.459151 1141875 out.go:177] * Verifying gcp-auth addon...
	I1212 00:13:02.462796 1141875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 00:13:02.468167 1141875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 00:13:02.468191 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.473037 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.684244 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.921774 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.922951 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.977968 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.184690 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.420922 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.423990 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:03.478136 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.682892 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.921085 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:03.921793 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.976876 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.183204 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.345007 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:04.421539 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.422070 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:04.477221 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.689079 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.919397 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.920004 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:04.976937 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.182977 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.418607 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.420094 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:05.478301 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.682649 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.922729 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:05.923609 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.977129 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.183401 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.424140 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.424680 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:06.477503 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.683835 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.833802 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:06.920642 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.922085 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:06.976549 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.183657 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.419288 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.421273 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:07.477459 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.683475 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.949502 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:07.950734 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.982207 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.184088 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.421516 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:08.422403 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.477416 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.683922 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.835966 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:08.921936 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:08.923121 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.981244 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.183457 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.419877 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.422014 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:09.477108 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.683439 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.919721 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.920488 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:09.977634 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.183447 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.421078 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.423745 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:10.479323 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.685079 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.918968 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:10.919187 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.980094 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.183430 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.332833 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:11.418870 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:11.420039 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.476976 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.682440 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.919332 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.920588 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:11.976661 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.183409 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.419294 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.419727 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:12.476997 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.683095 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.920402 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:12.921840 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.977536 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.183431 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.419525 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.420200 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:13.477710 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.683413 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.833775 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:13.919195 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.919998 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:13.977348 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.182856 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.419272 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:14.419716 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.477234 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.683493 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.918770 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:14.919861 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.977581 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.183195 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.419478 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:15.419487 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.477457 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.682461 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.919153 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.920078 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:15.976774 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.183247 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.332985 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:16.419590 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:16.420704 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:16.477429 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.682766 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.918619 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:16.920128 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:16.977122 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.182695 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.421495 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:17.423943 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:17.478234 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.683390 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.920002 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:17.921299 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:17.976845 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.183017 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.418483 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:18.419807 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:18.477472 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.683131 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.833016 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:18.919361 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:18.920337 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:18.977921 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.183752 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.420535 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:19.421359 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:19.476974 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.683480 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.919260 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:19.920055 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:19.977032 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.182936 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.418888 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:20.419667 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:20.476781 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.683633 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.833237 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:20.919640 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:20.920668 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:20.980008 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.182710 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.419972 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:21.420024 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:21.476545 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.682559 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.918731 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:21.920086 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:21.977095 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.183036 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.417880 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:22.420340 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:22.477181 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.683250 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.920688 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:22.924975 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:22.978770 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.183258 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:23.332323 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:23.419162 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:23.419377 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:23.477165 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.684409 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:23.935810 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:23.945899 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:23.977971 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.183726 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:24.419133 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:24.420287 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:24.477150 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.683272 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:24.919956 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:24.921044 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:24.976442 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.184545 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:25.333864 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:25.420171 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:25.420764 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:25.477439 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.683451 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:25.920983 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:25.923424 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:25.977245 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.183015 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:26.419732 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:26.420499 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:26.477735 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.683575 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:26.921708 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:26.922999 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:26.978907 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.184233 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:27.420358 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:27.424334 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:27.478157 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.684237 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:27.840311 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:27.920205 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:27.925810 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:27.982221 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.185292 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:28.427796 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:28.437236 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:28.477682 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.690830 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:28.920846 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:28.922111 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:28.977471 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.183393 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:29.428686 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:29.429252 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:29.477935 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.684021 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:29.926001 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:29.927171 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:29.976943 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.185014 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:30.334105 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:30.419759 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:30.422458 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:30.478783 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.685885 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:30.920319 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:30.922435 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:30.976965 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.183552 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:31.421371 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:31.422647 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:31.478383 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.684733 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:31.920830 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:31.924469 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:31.977731 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:32.188918 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:32.335663 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:32.421235 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:32.421612 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:32.478204 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:32.683034 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:32.920695 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:32.924296 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:32.977617 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:33.188613 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:33.420162 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:33.421245 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:33.478196 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:33.683182 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:33.919549 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:33.921175 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:33.976679 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:34.183224 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:34.423012 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:34.423909 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:34.477116 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:34.683097 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:34.832772 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:34.918363 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:34.920381 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:34.977436 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:35.183912 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:35.430071 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:35.433793 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:35.478105 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:35.686324 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:35.923068 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:35.924992 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:35.976806 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:36.183348 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:36.418134 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:36.419260 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:36.477967 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:36.683505 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:36.833750 1141875 pod_ready.go:102] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:36.921916 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:36.923086 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:36.978206 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:37.183561 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:37.422533 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:37.423707 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:37.477949 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:37.682707 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:37.971436 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:37.972292 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:37.976932 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:38.183137 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:38.378385 1141875 pod_ready.go:92] pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:38.378413 1141875 pod_ready.go:81] duration metric: took 45.574765853s waiting for pod "coredns-5dd5756b68-cdhg6" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.378427 1141875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qch2c" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.387990 1141875 pod_ready.go:97] error getting pod "coredns-5dd5756b68-qch2c" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-qch2c" not found
	I1212 00:13:38.388019 1141875 pod_ready.go:81] duration metric: took 9.58395ms waiting for pod "coredns-5dd5756b68-qch2c" in "kube-system" namespace to be "Ready" ...
	E1212 00:13:38.388043 1141875 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-qch2c" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-qch2c" not found
	I1212 00:13:38.388052 1141875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.396826 1141875 pod_ready.go:92] pod "etcd-addons-004867" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:38.396852 1141875 pod_ready.go:81] duration metric: took 8.789913ms waiting for pod "etcd-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.396878 1141875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.409016 1141875 pod_ready.go:92] pod "kube-apiserver-addons-004867" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:38.409041 1141875 pod_ready.go:81] duration metric: took 12.15496ms waiting for pod "kube-apiserver-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.409054 1141875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.424559 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:38.425038 1141875 pod_ready.go:92] pod "kube-controller-manager-addons-004867" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:38.425061 1141875 pod_ready.go:81] duration metric: took 15.994909ms waiting for pod "kube-controller-manager-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.425074 1141875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mp9fx" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.425717 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:38.477719 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:38.535265 1141875 pod_ready.go:92] pod "kube-proxy-mp9fx" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:38.535333 1141875 pod_ready.go:81] duration metric: took 110.244431ms waiting for pod "kube-proxy-mp9fx" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.535348 1141875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.688831 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:38.920403 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:38.921513 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:38.930895 1141875 pod_ready.go:92] pod "kube-scheduler-addons-004867" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:38.930981 1141875 pod_ready.go:81] duration metric: took 395.623799ms waiting for pod "kube-scheduler-addons-004867" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:38.931013 1141875 pod_ready.go:38] duration metric: took 46.137570674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:13:38.931067 1141875 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:13:38.931179 1141875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:13:38.956737 1141875 api_server.go:72] duration metric: took 46.690717552s to wait for apiserver process to appear ...
	I1212 00:13:38.956802 1141875 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:13:38.956834 1141875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:13:38.966771 1141875 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:13:38.968336 1141875 api_server.go:141] control plane version: v1.28.4
	I1212 00:13:38.968391 1141875 api_server.go:131] duration metric: took 11.567797ms to wait for apiserver health ...
	I1212 00:13:38.968416 1141875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:13:38.981188 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:39.139230 1141875 system_pods.go:59] 18 kube-system pods found
	I1212 00:13:39.139352 1141875 system_pods.go:61] "coredns-5dd5756b68-cdhg6" [f4f26300-1113-40f2-aef9-4752a2321efc] Running
	I1212 00:13:39.139380 1141875 system_pods.go:61] "csi-hostpath-attacher-0" [4a7647b2-5bab-4c90-a634-5c876a132fd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 00:13:39.139417 1141875 system_pods.go:61] "csi-hostpath-resizer-0" [c8e7408f-7573-4a44-b6bc-2b3d2a3cf5ca] Running
	I1212 00:13:39.139451 1141875 system_pods.go:61] "csi-hostpathplugin-s4zbh" [3da201eb-db62-4426-9fde-64274906b635] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 00:13:39.139475 1141875 system_pods.go:61] "etcd-addons-004867" [b62bc234-7fe1-4f6a-a339-62c37c437ec5] Running
	I1212 00:13:39.139498 1141875 system_pods.go:61] "kindnet-4g5jj" [d96b698c-0c9b-4eb0-92d0-460d4ab33d21] Running
	I1212 00:13:39.139530 1141875 system_pods.go:61] "kube-apiserver-addons-004867" [4f6402c3-245a-4280-86f6-51489da25e4b] Running
	I1212 00:13:39.139559 1141875 system_pods.go:61] "kube-controller-manager-addons-004867" [8c334ac6-5bce-4de4-bccc-6ddec57020ad] Running
	I1212 00:13:39.139585 1141875 system_pods.go:61] "kube-ingress-dns-minikube" [8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:13:39.139608 1141875 system_pods.go:61] "kube-proxy-mp9fx" [49611a5c-f56d-45f9-bb20-03a0b66e093a] Running
	I1212 00:13:39.139641 1141875 system_pods.go:61] "kube-scheduler-addons-004867" [c594f6ae-ff48-4fea-ab94-e07c13ed413b] Running
	I1212 00:13:39.139669 1141875 system_pods.go:61] "metrics-server-7c66d45ddc-q52pq" [11e8547a-29fb-4d97-8ce9-2f39d348f2b0] Running
	I1212 00:13:39.139695 1141875 system_pods.go:61] "nvidia-device-plugin-daemonset-jw7m9" [060065fa-bb93-4bde-a940-e2f0d2d797f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 00:13:39.139720 1141875 system_pods.go:61] "registry-mb5bh" [e49fc75b-bdf3-4bc7-974c-ad2b60ad2aa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:13:39.139756 1141875 system_pods.go:61] "registry-proxy-v6fdt" [9b09d19d-de70-4521-800a-de11772f56c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 00:13:39.139784 1141875 system_pods.go:61] "snapshot-controller-58dbcc7b99-9fwxg" [5523230b-9000-4270-8a6e-2f31e94b3215] Running
	I1212 00:13:39.139809 1141875 system_pods.go:61] "snapshot-controller-58dbcc7b99-r7cx4" [cf52eca2-1a27-4c7c-b480-94776476a0b1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:13:39.139832 1141875 system_pods.go:61] "storage-provisioner" [2f1ebd03-35ce-4273-b334-87e0945cf35a] Running
	I1212 00:13:39.139864 1141875 system_pods.go:74] duration metric: took 171.428418ms to wait for pod list to return data ...
	I1212 00:13:39.139891 1141875 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:13:39.186971 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:39.330013 1141875 default_sa.go:45] found service account: "default"
	I1212 00:13:39.330045 1141875 default_sa.go:55] duration metric: took 190.13297ms for default service account to be created ...
	I1212 00:13:39.330056 1141875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:13:39.419485 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:39.420359 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:39.483364 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:39.548756 1141875 system_pods.go:86] 18 kube-system pods found
	I1212 00:13:39.548840 1141875 system_pods.go:89] "coredns-5dd5756b68-cdhg6" [f4f26300-1113-40f2-aef9-4752a2321efc] Running
	I1212 00:13:39.548875 1141875 system_pods.go:89] "csi-hostpath-attacher-0" [4a7647b2-5bab-4c90-a634-5c876a132fd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 00:13:39.548910 1141875 system_pods.go:89] "csi-hostpath-resizer-0" [c8e7408f-7573-4a44-b6bc-2b3d2a3cf5ca] Running
	I1212 00:13:39.548940 1141875 system_pods.go:89] "csi-hostpathplugin-s4zbh" [3da201eb-db62-4426-9fde-64274906b635] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 00:13:39.548961 1141875 system_pods.go:89] "etcd-addons-004867" [b62bc234-7fe1-4f6a-a339-62c37c437ec5] Running
	I1212 00:13:39.548997 1141875 system_pods.go:89] "kindnet-4g5jj" [d96b698c-0c9b-4eb0-92d0-460d4ab33d21] Running
	I1212 00:13:39.549020 1141875 system_pods.go:89] "kube-apiserver-addons-004867" [4f6402c3-245a-4280-86f6-51489da25e4b] Running
	I1212 00:13:39.549041 1141875 system_pods.go:89] "kube-controller-manager-addons-004867" [8c334ac6-5bce-4de4-bccc-6ddec57020ad] Running
	I1212 00:13:39.549078 1141875 system_pods.go:89] "kube-ingress-dns-minikube" [8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:13:39.549101 1141875 system_pods.go:89] "kube-proxy-mp9fx" [49611a5c-f56d-45f9-bb20-03a0b66e093a] Running
	I1212 00:13:39.549122 1141875 system_pods.go:89] "kube-scheduler-addons-004867" [c594f6ae-ff48-4fea-ab94-e07c13ed413b] Running
	I1212 00:13:39.549184 1141875 system_pods.go:89] "metrics-server-7c66d45ddc-q52pq" [11e8547a-29fb-4d97-8ce9-2f39d348f2b0] Running
	I1212 00:13:39.549213 1141875 system_pods.go:89] "nvidia-device-plugin-daemonset-jw7m9" [060065fa-bb93-4bde-a940-e2f0d2d797f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 00:13:39.549236 1141875 system_pods.go:89] "registry-mb5bh" [e49fc75b-bdf3-4bc7-974c-ad2b60ad2aa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:13:39.549289 1141875 system_pods.go:89] "registry-proxy-v6fdt" [9b09d19d-de70-4521-800a-de11772f56c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 00:13:39.549307 1141875 system_pods.go:89] "snapshot-controller-58dbcc7b99-9fwxg" [5523230b-9000-4270-8a6e-2f31e94b3215] Running
	I1212 00:13:39.549344 1141875 system_pods.go:89] "snapshot-controller-58dbcc7b99-r7cx4" [cf52eca2-1a27-4c7c-b480-94776476a0b1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:13:39.549369 1141875 system_pods.go:89] "storage-provisioner" [2f1ebd03-35ce-4273-b334-87e0945cf35a] Running
	I1212 00:13:39.549392 1141875 system_pods.go:126] duration metric: took 219.330134ms to wait for k8s-apps to be running ...
	I1212 00:13:39.550316 1141875 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:13:39.550412 1141875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:13:39.583895 1141875 system_svc.go:56] duration metric: took 33.575062ms WaitForService to wait for kubelet.
	I1212 00:13:39.583966 1141875 kubeadm.go:581] duration metric: took 47.317951122s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:13:39.584000 1141875 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:13:39.684285 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:39.730281 1141875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:13:39.730317 1141875 node_conditions.go:123] node cpu capacity is 2
	I1212 00:13:39.730330 1141875 node_conditions.go:105] duration metric: took 146.310077ms to run NodePressure ...
	I1212 00:13:39.730343 1141875 start.go:228] waiting for startup goroutines ...
	I1212 00:13:39.919367 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:39.920165 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:39.976768 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:40.183746 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:40.418797 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:40.420156 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:40.476548 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:40.684999 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:40.919427 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:40.921476 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:40.977514 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:41.183992 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:41.422215 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:41.423152 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:41.477218 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:41.686499 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:41.917830 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:41.921977 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:41.977359 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:42.185339 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:42.421614 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:42.422807 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:42.479042 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:42.683526 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:42.919626 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:42.920471 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:42.977501 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:43.182878 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:43.419285 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:43.420439 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:43.477399 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:43.685249 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:43.922198 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:43.923626 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:43.987957 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:44.184389 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:44.423698 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:44.426536 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:44.477026 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:44.683229 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:44.918890 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:44.920210 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:44.976813 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:45.183725 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:45.419822 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:45.422322 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:45.477349 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:45.683709 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:45.923817 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:45.925313 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:45.978981 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:46.184660 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:46.421771 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:46.423085 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:46.484119 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:46.684103 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:46.918973 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:46.921401 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:46.978224 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:47.183800 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:47.422654 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:47.423231 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:47.477596 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:47.683206 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:47.921572 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:47.922082 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:47.976777 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:48.184975 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:48.423779 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:48.424645 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:48.477685 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:48.683173 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:48.919893 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:48.925725 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:48.978014 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:49.186744 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:49.419812 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:49.420724 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:49.476485 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:49.682831 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:49.921267 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:49.922698 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:49.977608 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:50.182963 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:50.419469 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:50.419652 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:50.477081 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:50.682774 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:50.919260 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:50.920646 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:50.977384 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:51.184102 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:51.419402 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:51.420314 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:51.478251 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:51.682830 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:51.922423 1141875 kapi.go:107] duration metric: took 52.540438668s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 00:13:51.927889 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:51.976975 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:52.183935 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:52.418745 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:52.477797 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:52.683954 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:52.918347 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:52.977399 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:53.184669 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:53.419227 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:53.477087 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:53.682857 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:53.917933 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:53.977851 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:54.183611 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:54.418984 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:54.476985 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:54.683143 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:54.918785 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:54.977297 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:55.183546 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.418922 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:55.477741 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:55.683283 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.918468 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:55.977010 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:56.182929 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:56.418622 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:56.477565 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:56.683176 1141875 kapi.go:107] duration metric: took 55.52484252s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:13:56.918910 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:56.977425 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:57.418636 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:57.477387 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:57.920788 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:57.977420 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:58.418474 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:58.477166 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:58.918296 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:58.976688 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:59.418459 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:59.477570 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:59.918982 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:59.976478 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:00.418774 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:00.477772 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:00.917775 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:00.977621 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:01.418640 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:01.477374 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:01.918502 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:01.977679 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:02.418831 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:02.477419 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:02.918121 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:02.976700 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:03.418973 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:03.477888 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:03.918372 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:03.976966 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:04.420671 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:04.478253 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:04.918445 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:04.977225 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:05.418112 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:05.477538 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:05.919523 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:05.977144 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:06.418494 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:06.478154 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:06.918686 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:06.977519 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:07.419170 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:07.477289 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:07.920614 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:07.977309 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:08.419284 1141875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:08.477533 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:08.918317 1141875 kapi.go:107] duration metric: took 1m9.535535328s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:14:08.976901 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:09.477380 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:09.979601 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:10.479905 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:10.976919 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:11.476807 1141875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:11.976811 1141875 kapi.go:107] duration metric: took 1m9.514014404s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:14:11.978813 1141875 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-004867 cluster.
	I1212 00:14:11.980826 1141875 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:14:11.982660 1141875 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:14:11.984945 1141875 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, storage-provisioner, cloud-spanner, storage-provisioner-rancher, ingress-dns, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1212 00:14:11.987001 1141875 addons.go:502] enable addons completed in 1m20.041746561s: enabled=[nvidia-device-plugin default-storageclass storage-provisioner cloud-spanner storage-provisioner-rancher ingress-dns inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1212 00:14:11.987041 1141875 start.go:233] waiting for cluster config update ...
	I1212 00:14:11.987059 1141875 start.go:242] writing updated cluster config ...
	I1212 00:14:11.987401 1141875 ssh_runner.go:195] Run: rm -f paused
	I1212 00:14:12.300600 1141875 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 00:14:12.303012 1141875 out.go:177] * Done! kubectl is now configured to use "addons-004867" cluster and "default" namespace by default
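	The gcp-auth messages above point at a pod-level opt-out via a label on the pod configuration. A minimal sketch of what that could look like, assuming the conventional "true" value; only the `gcp-auth-skip-secret` label key comes from the log message, while the pod name, container name and image are illustrative:
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                # illustrative name
	    labels:
	      # Presence of this label asks the gcp-auth webhook to skip mounting
	      # credentials into the pod; the key is quoted from the log line above,
	      # the "true" value is an assumption.
	      gcp-auth-skip-secret: "true"
	  spec:
	    containers:
	    - name: app                       # illustrative container
	      image: nginx                    # illustrative image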
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e6c8206c996d       dd1b12fcb6097       7 seconds ago       Exited              hello-world-app           2                   2d2d583f91ab4       hello-world-app-5d77478584-zk6d6
	14275b0d5645b       f09fc93534f6a       34 seconds ago      Running             nginx                     0                   57f5bd3a7a7d8       nginx
	d3a1e8a3fd4be       14b04e7ab95a8       43 seconds ago      Running             headlamp                  0                   81644802a6e7d       headlamp-777fd4b855-jg2nq
	47b3c20284c88       2a5f29343eb03       2 minutes ago       Running             gcp-auth                  0                   5d3abfae15c6b       gcp-auth-d4c87556c-9hxvz
	62f7c45a7960c       af594c6a879f2       2 minutes ago       Exited              patch                     0                   ffe532bd97b0c       ingress-nginx-admission-patch-gxj6z
	eca12abc86e0f       97e04611ad434       2 minutes ago       Running             coredns                   0                   203dd3ac25640       coredns-5dd5756b68-cdhg6
	5bb912616daa6       af594c6a879f2       2 minutes ago       Exited              create                    0                   8879064aae8ec       ingress-nginx-admission-create-25b5t
	1ffa537072656       ba04bb24b9575       3 minutes ago       Running             storage-provisioner       0                   812b0f2c8a6fa       storage-provisioner
	e054034899fc2       3ca3ca488cf13       3 minutes ago       Running             kube-proxy                0                   bcb761dd9c901       kube-proxy-mp9fx
	9219e81537498       04b4eaa3d3db8       3 minutes ago       Running             kindnet-cni               0                   c8515a493494a       kindnet-4g5jj
	aaee43f567a41       9961cbceaf234       3 minutes ago       Running             kube-controller-manager   0                   478bd16d05a11       kube-controller-manager-addons-004867
	81d8b0b7b78c8       05c284c929889       3 minutes ago       Running             kube-scheduler            0                   4f9c44fde499a       kube-scheduler-addons-004867
	c7ad50105f74f       04b4c447bb9d4       3 minutes ago       Running             kube-apiserver            0                   5088b0816ca0e       kube-apiserver-addons-004867
	a135dc64bbe22       9cdd6470f48c8       3 minutes ago       Running             etcd                      0                   5065898f58287       etcd-addons-004867
	
	* 
	* ==> containerd <==
	* Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.811942273Z" level=info msg="StartContainer for \"6e6c8206c996db4ead8d810610b273c1aba62038c968e5c7b26b5d7453b17a69\""
	Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.872808480Z" level=info msg="StartContainer for \"6e6c8206c996db4ead8d810610b273c1aba62038c968e5c7b26b5d7453b17a69\" returns successfully"
	Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.902453462Z" level=info msg="shim disconnected" id=6e6c8206c996db4ead8d810610b273c1aba62038c968e5c7b26b5d7453b17a69
	Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.902521416Z" level=warning msg="cleaning up after shim disconnected" id=6e6c8206c996db4ead8d810610b273c1aba62038c968e5c7b26b5d7453b17a69 namespace=k8s.io
	Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.902532723Z" level=info msg="cleaning up dead shim"
	Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.913740121Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:16:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11124 runtime=io.containerd.runc.v2\n"
	Dec 12 00:16:03 addons-004867 containerd[748]: time="2023-12-12T00:16:03.994877532Z" level=info msg="RemoveContainer for \"45f832b77906343538d419aac3cfadc1b4e78aaee0ce800a1ee4cb78a431fdd6\""
	Dec 12 00:16:04 addons-004867 containerd[748]: time="2023-12-12T00:16:04.011562447Z" level=info msg="RemoveContainer for \"45f832b77906343538d419aac3cfadc1b4e78aaee0ce800a1ee4cb78a431fdd6\" returns successfully"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.768467090Z" level=info msg="Kill container \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\""
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.840563226Z" level=info msg="shim disconnected" id=fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.840630319Z" level=warning msg="cleaning up after shim disconnected" id=fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46 namespace=k8s.io
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.840643406Z" level=info msg="cleaning up dead shim"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.850929711Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:16:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11157 runtime=io.containerd.runc.v2\n"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.854221364Z" level=info msg="StopContainer for \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\" returns successfully"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.854979045Z" level=info msg="StopPodSandbox for \"203dbed680efeb0cf93d70eac3d1d6185b19d69b1e2822d3248ea4c4da6b43f9\""
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.855146027Z" level=info msg="Container to stop \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.890922757Z" level=info msg="shim disconnected" id=203dbed680efeb0cf93d70eac3d1d6185b19d69b1e2822d3248ea4c4da6b43f9
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.891396198Z" level=warning msg="cleaning up after shim disconnected" id=203dbed680efeb0cf93d70eac3d1d6185b19d69b1e2822d3248ea4c4da6b43f9 namespace=k8s.io
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.891526338Z" level=info msg="cleaning up dead shim"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.903038514Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:16:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11188 runtime=io.containerd.runc.v2\n"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.960313369Z" level=info msg="TearDown network for sandbox \"203dbed680efeb0cf93d70eac3d1d6185b19d69b1e2822d3248ea4c4da6b43f9\" successfully"
	Dec 12 00:16:05 addons-004867 containerd[748]: time="2023-12-12T00:16:05.960505162Z" level=info msg="StopPodSandbox for \"203dbed680efeb0cf93d70eac3d1d6185b19d69b1e2822d3248ea4c4da6b43f9\" returns successfully"
	Dec 12 00:16:06 addons-004867 containerd[748]: time="2023-12-12T00:16:06.016757182Z" level=info msg="RemoveContainer for \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\""
	Dec 12 00:16:06 addons-004867 containerd[748]: time="2023-12-12T00:16:06.022890973Z" level=info msg="RemoveContainer for \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\" returns successfully"
	Dec 12 00:16:06 addons-004867 containerd[748]: time="2023-12-12T00:16:06.023725946Z" level=error msg="ContainerStatus for \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\": not found"
	
	* 
	* ==> coredns [eca12abc86e0f7be3523255d7263c3005732a1a2ed9a2cff9c6ab0f854e03a9a] <==
	* [INFO] 10.244.0.18:42291 - 6541 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072845s
	[INFO] 10.244.0.18:42291 - 15684 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060192s
	[INFO] 10.244.0.18:42291 - 5245 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059011s
	[INFO] 10.244.0.18:42291 - 2682 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059454s
	[INFO] 10.244.0.18:42291 - 1682 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003493408s
	[INFO] 10.244.0.18:42291 - 64401 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002552623s
	[INFO] 10.244.0.18:42291 - 16457 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115175s
	[INFO] 10.244.0.18:47443 - 11064 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000114395s
	[INFO] 10.244.0.18:47443 - 39084 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069226s
	[INFO] 10.244.0.18:47443 - 43514 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062276s
	[INFO] 10.244.0.18:47443 - 51158 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037612s
	[INFO] 10.244.0.18:47443 - 21629 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059036s
	[INFO] 10.244.0.18:47443 - 44170 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057715s
	[INFO] 10.244.0.18:47443 - 28386 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001274231s
	[INFO] 10.244.0.18:47443 - 60579 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001029622s
	[INFO] 10.244.0.18:47443 - 54764 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070465s
	[INFO] 10.244.0.18:44818 - 40623 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00008352s
	[INFO] 10.244.0.18:44818 - 10901 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065952s
	[INFO] 10.244.0.18:44818 - 17117 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061916s
	[INFO] 10.244.0.18:44818 - 29536 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064836s
	[INFO] 10.244.0.18:44818 - 24341 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072475s
	[INFO] 10.244.0.18:44818 - 44058 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078399s
	[INFO] 10.244.0.18:44818 - 63265 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001190802s
	[INFO] 10.244.0.18:44818 - 50463 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001069826s
	[INFO] 10.244.0.18:44818 - 41313 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077752s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-004867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-004867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=addons-004867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_12_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-004867
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:12:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-004867
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:16:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:15:43 +0000   Tue, 12 Dec 2023 00:12:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:15:43 +0000   Tue, 12 Dec 2023 00:12:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:15:43 +0000   Tue, 12 Dec 2023 00:12:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:15:43 +0000   Tue, 12 Dec 2023 00:12:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-004867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cc334bc2b264036bd01be71428e8255
	  System UUID:                900a6385-375c-40cb-9158-ac2ea91ace53
	  Boot ID:                    6562b840-385e-4140-a0d3-196e503f4900
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zk6d6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-9hxvz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  headlamp                    headlamp-777fd4b855-jg2nq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-5dd5756b68-cdhg6                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m20s
	  kube-system                 etcd-addons-004867                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m33s
	  kube-system                 kindnet-4g5jj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m20s
	  kube-system                 kube-apiserver-addons-004867             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-controller-manager-addons-004867    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-proxy-mp9fx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-scheduler-addons-004867             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m18s  kube-proxy       
	  Normal  Starting                 3m33s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m33s  kubelet          Node addons-004867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m33s  kubelet          Node addons-004867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m33s  kubelet          Node addons-004867 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m33s  kubelet          Node addons-004867 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m32s  kubelet          Node addons-004867 status is now: NodeReady
	  Normal  RegisteredNode           3m21s  node-controller  Node addons-004867 event: Registered Node addons-004867 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001103] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001114] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +0.004970] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=000000001bb038f1
	[  +0.001195] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000760] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000ce236adb
	[  +0.001178] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +3.628923] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001149] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000c19aa351
	[  +0.001199] FS-Cache: O-key=[8] '4f3e5c0100000000'
	[  +0.000795] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001065] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001178] FS-Cache: N-key=[8] '4f3e5c0100000000'
	[  +0.413575] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000d9ff942f
	[  +0.001137] FS-Cache: O-key=[8] '553e5c0100000000'
	[  +0.000730] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001098] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=000000008357462d
	[  +0.001241] FS-Cache: N-key=[8] '553e5c0100000000'
	
	* 
	* ==> etcd [a135dc64bbe22e772ca5e3e81bb7e77bf4aefdc8e07b96e326d38d77e3d092c3] <==
	* {"level":"info","ts":"2023-12-12T00:12:31.83193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-12T00:12:31.832008Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-12T00:12:31.833443Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T00:12:31.833599Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T00:12:31.833621Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:12:31.833724Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:12:31.833735Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:12:31.911511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T00:12:31.911552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T00:12:31.911567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-12T00:12:31.911588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:12:31.911594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:12:31.911605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-12T00:12:31.911612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:12:31.912435Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-004867 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:12:31.912603Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:12:31.91284Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:12:31.914036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-12T00:12:31.914089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:12:31.914984Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:12:31.915042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:12:31.915051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T00:12:31.915919Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:12:31.915998Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:12:31.916019Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [47b3c20284c88ce20d42ab63d3c0e50633955c050b72b0c7ced6f17325f307d3] <==
	* 2023/12/12 00:14:10 GCP Auth Webhook started!
	2023/12/12 00:14:22 Ready to marshal response ...
	2023/12/12 00:14:22 Ready to write response ...
	2023/12/12 00:14:30 Ready to marshal response ...
	2023/12/12 00:14:30 Ready to write response ...
	2023/12/12 00:14:33 Ready to marshal response ...
	2023/12/12 00:14:33 Ready to write response ...
	2023/12/12 00:14:33 Ready to marshal response ...
	2023/12/12 00:14:33 Ready to write response ...
	2023/12/12 00:14:43 Ready to marshal response ...
	2023/12/12 00:14:43 Ready to write response ...
	2023/12/12 00:15:01 Ready to marshal response ...
	2023/12/12 00:15:01 Ready to write response ...
	2023/12/12 00:15:24 Ready to marshal response ...
	2023/12/12 00:15:24 Ready to write response ...
	2023/12/12 00:15:24 Ready to marshal response ...
	2023/12/12 00:15:24 Ready to write response ...
	2023/12/12 00:15:24 Ready to marshal response ...
	2023/12/12 00:15:24 Ready to write response ...
	2023/12/12 00:15:35 Ready to marshal response ...
	2023/12/12 00:15:35 Ready to write response ...
	2023/12/12 00:15:45 Ready to marshal response ...
	2023/12/12 00:15:45 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:16:11 up  6:58,  0 users,  load average: 0.71, 0.94, 0.52
	Linux addons-004867 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [9219e81537498061f40957de6d07bfbd47f58c409832200fcc0db6f8cc1647fe] <==
	* I1212 00:14:03.122817       1 main.go:227] handling current node
	I1212 00:14:13.136325       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:13.141913       1 main.go:227] handling current node
	I1212 00:14:23.146634       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:23.146666       1 main.go:227] handling current node
	I1212 00:14:33.172073       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:33.172101       1 main.go:227] handling current node
	I1212 00:14:43.183890       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:43.184162       1 main.go:227] handling current node
	I1212 00:14:53.195795       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:53.195823       1 main.go:227] handling current node
	I1212 00:15:03.209194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:03.209222       1 main.go:227] handling current node
	I1212 00:15:13.221493       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:13.221634       1 main.go:227] handling current node
	I1212 00:15:23.227626       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:23.227652       1 main.go:227] handling current node
	I1212 00:15:33.240119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:33.240153       1 main.go:227] handling current node
	I1212 00:15:43.246356       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:43.246563       1 main.go:227] handling current node
	I1212 00:15:53.250568       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:53.250598       1 main.go:227] handling current node
	I1212 00:16:03.264093       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:03.264122       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c7ad50105f74ff7dede0ae0493bcb84cbdd0caa0597b45f5a650e0679f740dd0] <==
	* I1212 00:15:16.653503       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:15:16.653554       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:15:16.653590       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:15:16.653616       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:15:16.666836       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:15:16.666897       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:15:16.689916       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:15:16.690140       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:15:16.798652       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:15:16.798717       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:15:16.804978       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:15:16.805026       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 00:15:17.655023       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 00:15:17.808658       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1212 00:15:17.827141       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1212 00:15:24.108262       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.9.5"}
	I1212 00:15:35.009758       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:15:35.334552       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.162.206"}
	I1212 00:15:38.176538       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1212 00:15:38.189553       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1212 00:15:39.203377       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1212 00:15:39.301668       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:15:45.618307       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.135.69"}
	E1212 00:16:02.875610       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1212 00:16:02.932380       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [aaee43f567a41849c000ed62fb506bba6b3e136be5b5a1a397993a5e030cfd4d] <==
	* I1212 00:15:45.448654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.193921ms"
	I1212 00:15:45.448878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="171.691µs"
	I1212 00:15:45.465125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.729µs"
	W1212 00:15:46.814129       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:15:46.814161       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:15:47.961205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.343µs"
	I1212 00:15:48.303983       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1212 00:15:48.964294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.297µs"
	I1212 00:15:49.967793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.751µs"
	I1212 00:15:51.100458       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1212 00:15:51.100496       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:15:51.448424       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1212 00:15:51.448650       1 shared_informer.go:318] Caches are synced for garbage collector
	W1212 00:15:53.668937       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:15:53.668974       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:15:58.701900       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:15:58.701970       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:15:59.964160       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:15:59.964201       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:16:00.230361       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:16:00.234327       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:16:02.730865       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1212 00:16:02.731862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.193µs"
	I1212 00:16:02.747826       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 00:16:04.014879       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.485µs"
	
	* 
	* ==> kube-proxy [e054034899fc22380bdb51d0d8bad840b62eca3b01788c8793073a001a4004e5] <==
	* I1212 00:12:52.890926       1 server_others.go:69] "Using iptables proxy"
	I1212 00:12:52.907276       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:12:53.002213       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:12:53.006666       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:12:53.006894       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:12:53.006998       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:12:53.007192       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:12:53.017043       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:12:53.017068       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:12:53.022860       1 config.go:188] "Starting service config controller"
	I1212 00:12:53.023053       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:12:53.023086       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:12:53.023099       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:12:53.024014       1 config.go:315] "Starting node config controller"
	I1212 00:12:53.024024       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:12:53.125251       1 shared_informer.go:318] Caches are synced for node config
	I1212 00:12:53.125283       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:12:53.125337       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [81d8b0b7b78c879cfcea069e18b77937b21d56159dee2d1df8c6ce0afea8e2e6] <==
	* W1212 00:12:35.895900       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:12:35.896061       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:12:35.896422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:12:35.896678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 00:12:35.896603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:12:35.896913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 00:12:35.896650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:12:35.897312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:12:36.711671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:12:36.711711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:12:36.728304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:36.728341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:36.761751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:36.761941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:36.865899       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:12:36.866039       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:12:36.872703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:12:36.872801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:12:36.883975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:36.884090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:36.885537       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:12:36.885623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 00:12:36.911436       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:12:36.911540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 00:12:38.745077       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:15:49 addons-004867 kubelet[1360]: E1212 00:15:49.955195    1360 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zk6d6_default(b1ecb904-3251-4d45-8eef-97587e480a81)\"" pod="default/hello-world-app-5d77478584-zk6d6" podUID="b1ecb904-3251-4d45-8eef-97587e480a81"
	Dec 12 00:15:52 addons-004867 kubelet[1360]: I1212 00:15:52.774893    1360 scope.go:117] "RemoveContainer" containerID="c40c7d1e019557e8bbb1f33c7aec0d5f582fe75d8957e2ea09da3cc733134fc8"
	Dec 12 00:15:52 addons-004867 kubelet[1360]: E1212 00:15:52.775863    1360 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e"
	Dec 12 00:16:01 addons-004867 kubelet[1360]: I1212 00:16:01.692108    1360 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdwlw\" (UniqueName: \"kubernetes.io/projected/8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e-kube-api-access-vdwlw\") pod \"8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e\" (UID: \"8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e\") "
	Dec 12 00:16:01 addons-004867 kubelet[1360]: I1212 00:16:01.697087    1360 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e-kube-api-access-vdwlw" (OuterVolumeSpecName: "kube-api-access-vdwlw") pod "8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e" (UID: "8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e"). InnerVolumeSpecName "kube-api-access-vdwlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 00:16:01 addons-004867 kubelet[1360]: I1212 00:16:01.792993    1360 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vdwlw\" (UniqueName: \"kubernetes.io/projected/8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e-kube-api-access-vdwlw\") on node \"addons-004867\" DevicePath \"\""
	Dec 12 00:16:01 addons-004867 kubelet[1360]: I1212 00:16:01.983448    1360 scope.go:117] "RemoveContainer" containerID="c40c7d1e019557e8bbb1f33c7aec0d5f582fe75d8957e2ea09da3cc733134fc8"
	Dec 12 00:16:02 addons-004867 kubelet[1360]: I1212 00:16:02.803923    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e" path="/var/lib/kubelet/pods/8fa5bbfa-3ac3-4caa-b8e5-c19f798e7a0e/volumes"
	Dec 12 00:16:03 addons-004867 kubelet[1360]: I1212 00:16:03.775133    1360 scope.go:117] "RemoveContainer" containerID="45f832b77906343538d419aac3cfadc1b4e78aaee0ce800a1ee4cb78a431fdd6"
	Dec 12 00:16:03 addons-004867 kubelet[1360]: I1212 00:16:03.992325    1360 scope.go:117] "RemoveContainer" containerID="45f832b77906343538d419aac3cfadc1b4e78aaee0ce800a1ee4cb78a431fdd6"
	Dec 12 00:16:03 addons-004867 kubelet[1360]: I1212 00:16:03.992745    1360 scope.go:117] "RemoveContainer" containerID="6e6c8206c996db4ead8d810610b273c1aba62038c968e5c7b26b5d7453b17a69"
	Dec 12 00:16:03 addons-004867 kubelet[1360]: E1212 00:16:03.993066    1360 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zk6d6_default(b1ecb904-3251-4d45-8eef-97587e480a81)\"" pod="default/hello-world-app-5d77478584-zk6d6" podUID="b1ecb904-3251-4d45-8eef-97587e480a81"
	Dec 12 00:16:04 addons-004867 kubelet[1360]: I1212 00:16:04.777919    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="80b69d6d-a1a0-4073-828c-770d2c0a2012" path="/var/lib/kubelet/pods/80b69d6d-a1a0-4073-828c-770d2c0a2012/volumes"
	Dec 12 00:16:04 addons-004867 kubelet[1360]: I1212 00:16:04.778295    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8c1b8433-132f-4e28-abf8-f510ab29705a" path="/var/lib/kubelet/pods/8c1b8433-132f-4e28-abf8-f510ab29705a/volumes"
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.012252    1360 scope.go:117] "RemoveContainer" containerID="fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46"
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.023357    1360 scope.go:117] "RemoveContainer" containerID="fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46"
	Dec 12 00:16:06 addons-004867 kubelet[1360]: E1212 00:16:06.023961    1360 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\": not found" containerID="fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46"
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.024014    1360 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46"} err="failed to get container status \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd80d11fb5f8cd0bdaa8367a1df6e51f8f72ffba0306c61293348c5e83b26b46\": not found"
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.025290    1360 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7hdj\" (UniqueName: \"kubernetes.io/projected/b79f23a5-75ab-4648-b5ac-27ae3593459e-kube-api-access-w7hdj\") pod \"b79f23a5-75ab-4648-b5ac-27ae3593459e\" (UID: \"b79f23a5-75ab-4648-b5ac-27ae3593459e\") "
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.025335    1360 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b79f23a5-75ab-4648-b5ac-27ae3593459e-webhook-cert\") pod \"b79f23a5-75ab-4648-b5ac-27ae3593459e\" (UID: \"b79f23a5-75ab-4648-b5ac-27ae3593459e\") "
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.027990    1360 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79f23a5-75ab-4648-b5ac-27ae3593459e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b79f23a5-75ab-4648-b5ac-27ae3593459e" (UID: "b79f23a5-75ab-4648-b5ac-27ae3593459e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.031239    1360 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79f23a5-75ab-4648-b5ac-27ae3593459e-kube-api-access-w7hdj" (OuterVolumeSpecName: "kube-api-access-w7hdj") pod "b79f23a5-75ab-4648-b5ac-27ae3593459e" (UID: "b79f23a5-75ab-4648-b5ac-27ae3593459e"). InnerVolumeSpecName "kube-api-access-w7hdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.126552    1360 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w7hdj\" (UniqueName: \"kubernetes.io/projected/b79f23a5-75ab-4648-b5ac-27ae3593459e-kube-api-access-w7hdj\") on node \"addons-004867\" DevicePath \"\""
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.126596    1360 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b79f23a5-75ab-4648-b5ac-27ae3593459e-webhook-cert\") on node \"addons-004867\" DevicePath \"\""
	Dec 12 00:16:06 addons-004867 kubelet[1360]: I1212 00:16:06.776789    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b79f23a5-75ab-4648-b5ac-27ae3593459e" path="/var/lib/kubelet/pods/b79f23a5-75ab-4648-b5ac-27ae3593459e/volumes"
	
	* 
	* ==> storage-provisioner [1ffa5370726569f8005ae3802ffe51c6fb25322e544fdacad5e578b17c9e12f4] <==
	* I1212 00:12:57.657110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:12:57.694539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:12:57.694631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:12:57.792938       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:12:57.793348       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-004867_1126c3e2-7257-42e6-b1d9-73df951e40dc!
	I1212 00:12:57.794738       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d64ac73a-4eff-4e05-89bd-41e0316696af", APIVersion:"v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-004867_1126c3e2-7257-42e6-b1d9-73df951e40dc became leader
	I1212 00:12:57.893723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-004867_1126c3e2-7257-42e6-b1d9-73df951e40dc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-004867 -n addons-004867
helpers_test.go:261: (dbg) Run:  kubectl --context addons-004867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.29s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (17.77s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-204186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:19:53.288864 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-204186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (15.329158825s)

                                                
                                                
-- stdout --
	* [functional-204186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node functional-204186 in cluster functional-204186
	* Pulling base image ...
	* Updating the running docker "functional-204186" container ...
	* Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:19:53.194819 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.195218 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.195504 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.195789 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.196108 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.196405 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.208443 1166783 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.334327 1166783 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-204186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 15.329341503s for "functional-204186" cluster.
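Every stderr line above is the same symptom: requests to the apiserver at 192.168.49.2:8441 are refused, so the restart fails before --wait=all can ever see a ready node. A minimal manual check is sketched below, assuming the same profile name and the 127.0.0.1:34040 -> 8441/tcp host port mapping that docker inspect reports further down:

    # Is the control-plane container still up, and which host port fronts 8441?
    docker port functional-204186 8441
    out/minikube-linux-arm64 -p functional-204186 status

    # Probe the apiserver directly: a refused connection means nothing is listening,
    # while any HTTP response (even 401/403) means kube-apiserver is back up.
    curl -k https://127.0.0.1:34040/healthz

    # Once the apiserver answers, the kube-system pods the wait loop was polling should be reachable again.
    kubectl --context functional-204186 get pods -n kube-system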
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-204186
helpers_test.go:235: (dbg) docker inspect functional-204186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31",
	        "Created": "2023-12-12T00:18:25.221497989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1163078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:18:25.55807554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/hosts",
	        "LogPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31-json.log",
	        "Name": "/functional-204186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-204186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-204186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281-init/diff:/var/lib/docker/overlay2/83f94b9f515065f4cf4d4337d1fbe3fc13b585131a89a52ad8eb2b6bf7d119ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-204186",
	                "Source": "/var/lib/docker/volumes/functional-204186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-204186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-204186",
	                "name.minikube.sigs.k8s.io": "functional-204186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f06969edef670514b05008e5de9ef1c1b17b7cfbdaf03c893731542632a1c35",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34042"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34039"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34041"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34040"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1f06969edef6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-204186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7cfe39aaf2d8",
	                        "functional-204186"
	                    ],
	                    "NetworkID": "6ba4ac6be618f8f1444cda50bb12d14c77e16c004975f4866f6cf01acb655fe8",
	                    "EndpointID": "4e500a345b2e632c078524e976c542a16025a1d15c3a51f19fb2c9cb3755c9b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-204186 -n functional-204186
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-204186 -n functional-204186: exit status 2 (342.841489ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 logs -n 25: (1.644116297s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-111319                                                         | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:19 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | minikube-local-cache-test:functional-204186                              |                   |         |         |                     |                     |
	| cache   | functional-204186 cache delete                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | minikube-local-cache-test:functional-204186                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	| ssh     | functional-204186 ssh sudo                                               | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-204186                                                        | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh                                                    | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-204186 cache reload                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	| ssh     | functional-204186 ssh                                                    | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-204186 kubectl --                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | --context functional-204186                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:19:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:19:38.104221 1166783 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:19:38.104406 1166783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:19:38.104410 1166783 out.go:309] Setting ErrFile to fd 2...
	I1212 00:19:38.104415 1166783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:19:38.104683 1166783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:19:38.105092 1166783 out.go:303] Setting JSON to false
	I1212 00:19:38.106053 1166783 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25325,"bootTime":1702315053,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:19:38.106118 1166783 start.go:138] virtualization:  
	I1212 00:19:38.108824 1166783 out.go:177] * [functional-204186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:19:38.111872 1166783 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:19:38.114135 1166783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:19:38.112023 1166783 notify.go:220] Checking for updates...
	I1212 00:19:38.117202 1166783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:19:38.119664 1166783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:19:38.122229 1166783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:19:38.124644 1166783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:19:38.127615 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:38.127742 1166783 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:19:38.155131 1166783 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:19:38.155239 1166783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:19:38.234235 1166783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:19:38.224036211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:19:38.234326 1166783 docker.go:295] overlay module found
	I1212 00:19:38.236723 1166783 out.go:177] * Using the docker driver based on existing profile
	I1212 00:19:38.239483 1166783 start.go:298] selected driver: docker
	I1212 00:19:38.239491 1166783 start.go:902] validating driver "docker" against &{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:38.239572 1166783 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:19:38.239692 1166783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:19:38.328217 1166783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:19:38.318818701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:19:38.328604 1166783 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:19:38.328648 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:38.328655 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:38.328667 1166783 start_flags.go:323] config:
	{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:38.331267 1166783 out.go:177] * Starting control plane node functional-204186 in cluster functional-204186
	I1212 00:19:38.333309 1166783 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:19:38.335076 1166783 out.go:177] * Pulling base image ...
	I1212 00:19:38.336829 1166783 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:19:38.336887 1166783 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1212 00:19:38.336894 1166783 cache.go:56] Caching tarball of preloaded images
	I1212 00:19:38.336924 1166783 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:19:38.336994 1166783 preload.go:174] Found /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:19:38.337003 1166783 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1212 00:19:38.337114 1166783 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/config.json ...
	I1212 00:19:38.354767 1166783 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:19:38.354782 1166783 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:19:38.354805 1166783 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:19:38.354850 1166783 start.go:365] acquiring machines lock for functional-204186: {Name:mk52ac4d0a7302cc0a39b0bd3e6a9baa9621f9b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:19:38.354922 1166783 start.go:369] acquired machines lock for "functional-204186" in 52.545µs
	I1212 00:19:38.354940 1166783 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:19:38.354946 1166783 fix.go:54] fixHost starting: 
	I1212 00:19:38.355276 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:38.374303 1166783 fix.go:102] recreateIfNeeded on functional-204186: state=Running err=<nil>
	W1212 00:19:38.374326 1166783 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:19:38.377049 1166783 out.go:177] * Updating the running docker "functional-204186" container ...
	I1212 00:19:38.379542 1166783 machine.go:88] provisioning docker machine ...
	I1212 00:19:38.379560 1166783 ubuntu.go:169] provisioning hostname "functional-204186"
	I1212 00:19:38.379653 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:38.400809 1166783 main.go:141] libmachine: Using SSH client type: native
	I1212 00:19:38.401311 1166783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34043 <nil> <nil>}
	I1212 00:19:38.401325 1166783 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-204186 && echo "functional-204186" | sudo tee /etc/hostname
	I1212 00:19:38.558711 1166783 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-204186
	
	I1212 00:19:38.558784 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:38.582856 1166783 main.go:141] libmachine: Using SSH client type: native
	I1212 00:19:38.583296 1166783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34043 <nil> <nil>}
	I1212 00:19:38.583341 1166783 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-204186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-204186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-204186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:19:38.724752 1166783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:19:38.724773 1166783 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1135857/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1135857/.minikube}
	I1212 00:19:38.724797 1166783 ubuntu.go:177] setting up certificates
	I1212 00:19:38.724805 1166783 provision.go:83] configureAuth start
	I1212 00:19:38.724870 1166783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-204186
	I1212 00:19:38.743972 1166783 provision.go:138] copyHostCerts
	I1212 00:19:38.744040 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem, removing ...
	I1212 00:19:38.744067 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem
	I1212 00:19:38.744143 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem (1078 bytes)
	I1212 00:19:38.744245 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem, removing ...
	I1212 00:19:38.744249 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem
	I1212 00:19:38.744273 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem (1123 bytes)
	I1212 00:19:38.744330 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem, removing ...
	I1212 00:19:38.744335 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem
	I1212 00:19:38.744358 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem (1675 bytes)
	I1212 00:19:38.744406 1166783 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem org=jenkins.functional-204186 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-204186]
	I1212 00:19:39.317206 1166783 provision.go:172] copyRemoteCerts
	I1212 00:19:39.317258 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:19:39.317326 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.337099 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.437908 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 00:19:39.468125 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:19:39.498465 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
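The server certificate generated at provision.go:112 carries the SANs 192.168.49.2, 127.0.0.1, localhost, minikube and functional-204186 before being copied to /etc/docker above. As a hedged illustration (not minikube code), a small Go check that a PEM certificate actually covers a given name or IP; the file name server.pem and the function coversName are assumptions for the example.
	// cert_san_check.go - illustrative sketch, not minikube code.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// coversName reports whether the first certificate in pemBytes is valid
	// for the given hostname or IP address (checks DNS and IP SANs).
	func coversName(pemBytes []byte, name string) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.VerifyHostname(name) == nil, nil
	}
	
	func main() {
		data, err := os.ReadFile("server.pem") // e.g. the server cert copied to /etc/docker
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, n := range []string{"192.168.49.2", "localhost", "functional-204186"} {
			ok, err := coversName(data, n)
			fmt.Printf("%s covered=%v err=%v\n", n, ok, err)
		}
	}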
	I1212 00:19:39.527513 1166783 provision.go:86] duration metric: configureAuth took 802.695673ms
	I1212 00:19:39.527531 1166783 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:19:39.527738 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:39.527750 1166783 machine.go:91] provisioned docker machine in 1.148193061s
	I1212 00:19:39.527756 1166783 start.go:300] post-start starting for "functional-204186" (driver="docker")
	I1212 00:19:39.527765 1166783 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:19:39.527814 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:19:39.527849 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.546029 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.646100 1166783 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:19:39.650558 1166783 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:19:39.650583 1166783 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:19:39.650596 1166783 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:19:39.650602 1166783 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:19:39.650611 1166783 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/addons for local assets ...
	I1212 00:19:39.650666 1166783 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/files for local assets ...
	I1212 00:19:39.650748 1166783 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem -> 11412812.pem in /etc/ssl/certs
	I1212 00:19:39.650824 1166783 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts -> hosts in /etc/test/nested/copy/1141281
	I1212 00:19:39.650866 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1141281
	I1212 00:19:39.662106 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:19:39.691766 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts --> /etc/test/nested/copy/1141281/hosts (40 bytes)
	I1212 00:19:39.720033 1166783 start.go:303] post-start completed in 192.262029ms
	I1212 00:19:39.720120 1166783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:19:39.720158 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.738794 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.833772 1166783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:19:39.841224 1166783 fix.go:56] fixHost completed within 1.486270012s
	I1212 00:19:39.841239 1166783 start.go:83] releasing machines lock for "functional-204186", held for 1.486310422s
	I1212 00:19:39.841305 1166783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-204186
	I1212 00:19:39.862046 1166783 ssh_runner.go:195] Run: cat /version.json
	I1212 00:19:39.862101 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.862350 1166783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:19:39.862409 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.888120 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.889925 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.988243 1166783 ssh_runner.go:195] Run: systemctl --version
	I1212 00:19:40.123119 1166783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:19:40.130619 1166783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1212 00:19:40.156770 1166783 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:19:40.156843 1166783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:19:40.168259 1166783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
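The find/sed pass above patches the CNI loopback config (adds a "name": "loopback" field if missing and forces cniVersion to "1.0.0") and would rename any bridge/podman configs to *.mk_disabled; here none were found. A hedged Go sketch of just the loopback patch, treating the config as JSON rather than using sed; the file path in main is an assumption for the example.
	// cni_loopback_patch.go - illustrative sketch of the loopback conf patch.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// patchLoopbackConf ensures the CNI loopback config has a "name" field
	// and a cniVersion of "1.0.0", mirroring the sed one-liner above.
	func patchLoopbackConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var conf map[string]interface{}
		if err := json.Unmarshal(data, &conf); err != nil {
			return err
		}
		if _, ok := conf["name"]; !ok {
			conf["name"] = "loopback"
		}
		conf["cniVersion"] = "1.0.0"
		out, err := json.MarshalIndent(conf, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(path, out, 0644)
	}
	
	func main() {
		// Path is illustrative; the real run globs /etc/cni/net.d/*loopback.conf*.
		if err := patchLoopbackConf("/etc/cni/net.d/loopback.conf"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}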
	I1212 00:19:40.168274 1166783 start.go:475] detecting cgroup driver to use...
	I1212 00:19:40.168327 1166783 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:19:40.168376 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:19:40.185332 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:19:40.200998 1166783 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:19:40.201062 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:19:40.219091 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:19:40.236047 1166783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:19:40.377539 1166783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:19:40.507741 1166783 docker.go:219] disabling docker service ...
	I1212 00:19:40.507815 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:19:40.525312 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:19:40.541366 1166783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:19:40.671172 1166783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:19:40.800459 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:19:40.815340 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:19:40.836927 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:19:40.851130 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:19:40.864529 1166783 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:19:40.864600 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:19:40.880806 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:19:40.895794 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:19:40.909131 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:19:40.922165 1166783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:19:40.933419 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:19:40.946768 1166783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:19:40.957556 1166783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:19:40.968029 1166783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:19:41.083587 1166783 ssh_runner.go:195] Run: sudo systemctl restart containerd
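The sed commands above rewrite /etc/containerd/config.toml (pin the sandbox image to pause:3.9, set restrict_oom_score_adj = false, set SystemdCgroup = false for the cgroupfs driver, migrate the runtime type to io.containerd.runc.v2, fix conf_dir) before reloading and restarting containerd. A minimal Go sketch of just the SystemdCgroup flip, operating on the file as text the same way the sed does; it is an illustration under those assumptions, not minikube's implementation.
	// containerd_cgroup.go - illustrative sketch of the SystemdCgroup rewrite.
	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	// setSystemdCgroup rewrites every "SystemdCgroup = ..." line in the
	// containerd config so it matches the desired cgroup driver.
	func setSystemdCgroup(path string, useSystemd bool) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		repl := fmt.Sprintf("${1}SystemdCgroup = %v", useSystemd)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
	}
	
	func main() {
		// false => "cgroupfs" driver, as detected for this host (detect.go:196).
		if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}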
	I1212 00:19:41.323366 1166783 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:19:41.323436 1166783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:19:41.331217 1166783 start.go:543] Will wait 60s for crictl version
	I1212 00:19:41.331273 1166783 ssh_runner.go:195] Run: which crictl
	I1212 00:19:41.339039 1166783 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:19:41.383778 1166783 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1212 00:19:41.383846 1166783 ssh_runner.go:195] Run: containerd --version
	I1212 00:19:41.416098 1166783 ssh_runner.go:195] Run: containerd --version
	I1212 00:19:41.448227 1166783 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I1212 00:19:41.450208 1166783 cli_runner.go:164] Run: docker network inspect functional-204186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:19:41.467903 1166783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:19:41.474722 1166783 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:19:41.476837 1166783 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:19:41.476929 1166783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:19:41.521062 1166783 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:19:41.521076 1166783 containerd.go:518] Images already preloaded, skipping extraction
	I1212 00:19:41.521129 1166783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:19:41.563890 1166783 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:19:41.563902 1166783 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:19:41.563972 1166783 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:19:41.605056 1166783 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:19:41.605079 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:41.605088 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:41.605097 1166783 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:19:41.605114 1166783 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-204186 NodeName:functional-204186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:19:41.605243 1166783 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-204186"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:19:41.605308 1166783 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-204186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1212 00:19:41.605374 1166783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:19:41.618178 1166783 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:19:41.618260 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:19:41.629314 1166783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I1212 00:19:41.652600 1166783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:19:41.674858 1166783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
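The kubeadm config dumped at kubeadm.go:181 is derived from the options struct logged at kubeadm.go:176 and is written to /var/tmp/minikube/kubeadm.yaml.new above. As a toy illustration only (not the actual minikube bootstrapper, and with a deliberately reduced template), rendering such a document from a struct with text/template could look like this:
	// kubeadm_render.go - toy sketch of templating a kubeadm config; not minikube source.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	type kubeadmOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.49.2",
			BindPort:         8441,
			NodeName:         "functional-204186",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.28.4",
		}
		// Render to stdout; the real flow writes this to /var/tmp/minikube/kubeadm.yaml.new.
		if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
			os.Exit(1)
		}
	}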
	I1212 00:19:41.697192 1166783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:19:41.701898 1166783 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186 for IP: 192.168.49.2
	I1212 00:19:41.701928 1166783 certs.go:190] acquiring lock for shared ca certs: {Name:mk518d45f153d561b6d30fa5c8435abd4f573517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:41.702088 1166783 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key
	I1212 00:19:41.702139 1166783 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key
	I1212 00:19:41.702240 1166783 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.key
	I1212 00:19:41.702288 1166783 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.key.dd3b5fb2
	I1212 00:19:41.702322 1166783 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.key
	I1212 00:19:41.702433 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem (1338 bytes)
	W1212 00:19:41.702458 1166783 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281_empty.pem, impossibly tiny 0 bytes
	I1212 00:19:41.702465 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:19:41.702492 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:19:41.702516 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:19:41.702537 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem (1675 bytes)
	I1212 00:19:41.702582 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:19:41.703256 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:19:41.733829 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:19:41.764143 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:19:41.793194 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:19:41.822531 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:19:41.858002 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:19:41.895051 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:19:41.926100 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:19:41.955773 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:19:41.985536 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem --> /usr/share/ca-certificates/1141281.pem (1338 bytes)
	I1212 00:19:42.023297 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /usr/share/ca-certificates/11412812.pem (1708 bytes)
	I1212 00:19:42.056302 1166783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:19:42.081918 1166783 ssh_runner.go:195] Run: openssl version
	I1212 00:19:42.093411 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1141281.pem && ln -fs /usr/share/ca-certificates/1141281.pem /etc/ssl/certs/1141281.pem"
	I1212 00:19:42.109628 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.116307 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:18 /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.116422 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.138269 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1141281.pem /etc/ssl/certs/51391683.0"
	I1212 00:19:42.154200 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11412812.pem && ln -fs /usr/share/ca-certificates/11412812.pem /etc/ssl/certs/11412812.pem"
	I1212 00:19:42.169858 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.176203 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:18 /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.176290 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.189156 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11412812.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:19:42.205051 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:19:42.222308 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.228709 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.228802 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.241802 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:19:42.256130 1166783 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:19:42.262370 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:19:42.272839 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:19:42.283158 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:19:42.292851 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:19:42.302374 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:19:42.312206 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
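The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent, purely illustrative check in Go using crypto/x509; the function name expiresWithin and the two sample paths in main are taken from the runs above for the example.
	// cert_expiry.go - illustrative equivalent of `openssl x509 -checkend 86400`.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in the PEM file
	// expires within the given duration (mirrors -checkend <seconds>).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Printf("%s expiresWithin24h=%v err=%v\n", p, soon, err)
		}
	}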
	I1212 00:19:42.322552 1166783 kubeadm.go:404] StartCluster: {Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:42.322645 1166783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:19:42.322716 1166783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:19:42.378783 1166783 cri.go:89] found id: "4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783"
	I1212 00:19:42.378798 1166783 cri.go:89] found id: "9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d"
	I1212 00:19:42.378802 1166783 cri.go:89] found id: "4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743"
	I1212 00:19:42.378807 1166783 cri.go:89] found id: "b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e"
	I1212 00:19:42.378810 1166783 cri.go:89] found id: "7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	I1212 00:19:42.378816 1166783 cri.go:89] found id: "f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8"
	I1212 00:19:42.378820 1166783 cri.go:89] found id: "360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b"
	I1212 00:19:42.378823 1166783 cri.go:89] found id: "fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f"
	I1212 00:19:42.378827 1166783 cri.go:89] found id: "8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	I1212 00:19:42.378841 1166783 cri.go:89] found id: ""
	I1212 00:19:42.378903 1166783 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1212 00:19:42.413648 1166783 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b","pid":1280,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b/rootfs","created":"2023-12-12T00:18:43.008599856Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7f532c4a9c9f164eeeacdb7ee8b121ca"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783","pid":2831,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783/rootfs","created":"2023-12-12T00:19:35.214833678Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f4424a2e-f114-46c8-9059-3ddd8cab9386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743","pid":1892,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743/rootfs","created":"2023-12-12T00:19:04.817909252Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","io.kubernetes.cri.sandbox-name":"kindnet-p7qfc","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80d814ed-cb37-4243-97a7-61169cbf7ae7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","pid":1791,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5695af01bb75b84961173de189b46ab680
eebd75505d2f089d6304cab37f944e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e/rootfs","created":"2023-12-12T00:19:04.517306189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-p7qfc_80d814ed-cb37-4243-97a7-61169cbf7ae7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-p7qfc","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80d814ed-cb37-4243-97a7-61169cbf7ae7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","pid":1153,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672/rootfs","created":"2023-12-12T00:18:42.787933141Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-204186_7f532c4a9c9f164eeeacdb7ee8b121ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7f532c4a9c9f164eeeacdb7ee8b121ca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8204ec74c6c75a7ce2f3c9c385
56fda8152667cef4c5fd6f8c1c0281cb1b67e1","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1/rootfs","created":"2023-12-12T00:18:42.941005004Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bb521992826aaef3b829c57d52661ef"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","pid":1676,"status":"running","bundle":"/run/containerd/io.c
ontainerd.runtime.v2.task/k8s.io/91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b/rootfs","created":"2023-12-12T00:19:04.001976537Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f4424a2e-f114-46c8-9059-3ddd8cab9386","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f4424a2e-f114-46c8-9059-3ddd8cab9386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501
af","pid":1144,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af/rootfs","created":"2023-12-12T00:18:42.767256925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-204186_0bb521992826aaef3b829c57d52661ef","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bb521992826aaef3b829c57d52661ef"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","pid":1186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5/rootfs","created":"2023-12-12T00:18:42.825889148Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-204186_d3927b2e4e82a4e18057da3723e43cc0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubern
etes.cri.sandbox-uid":"d3927b2e4e82a4e18057da3723e43cc0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d","pid":2110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d/rootfs","created":"2023-12-12T00:19:19.034742843Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-75jb5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88486eba-5928-4a3b-b0e2-82572161ba5b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a33d
05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","pid":1752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa/rootfs","created":"2023-12-12T00:19:04.385714277Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xn2hr_17a4a16d-a0cd-45c8-bd8c-da9736f87535","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xn2hr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"17a4a16d-a0cd-45c8-bd8c-da9736f87
535"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","pid":1194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a/rootfs","created":"2023-12-12T00:18:42.861461679Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-204186_fe1cfa1135867fcf7ae120ad770b3e34","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system
","io.kubernetes.cri.sandbox-uid":"fe1cfa1135867fcf7ae120ad770b3e34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e","pid":1817,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e/rootfs","created":"2023-12-12T00:19:04.478952425Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","io.kubernetes.cri.sandbox-name":"kube-proxy-xn2hr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"17a4a16d-a0cd-45c8-bd8c-da9736f87535"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7
ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","pid":2076,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749/rootfs","created":"2023-12-12T00:19:18.939862893Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-75jb5_88486eba-5928-4a3b-b0e2-82572161ba5b","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-75jb5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88486
eba-5928-4a3b-b0e2-82572161ba5b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8","pid":1332,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8/rootfs","created":"2023-12-12T00:18:43.161380898Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","io.kubernetes.cri.sandbox-name":"etcd-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fe1cfa1135867fcf7ae120ad770b3e34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc16
3530fa8b5b88342f","pid":1326,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f/rootfs","created":"2023-12-12T00:18:43.150611388Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d3927b2e4e82a4e18057da3723e43cc0"},"owner":"root"}]
	I1212 00:19:42.413997 1166783 cri.go:126] list returned 16 containers
	I1212 00:19:42.414006 1166783 cri.go:129] container: {ID:360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b Status:running}
	I1212 00:19:42.414019 1166783 cri.go:135] skipping {360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b running}: state = "running", want "paused"
	I1212 00:19:42.414028 1166783 cri.go:129] container: {ID:4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 Status:running}
	I1212 00:19:42.414035 1166783 cri.go:135] skipping {4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 running}: state = "running", want "paused"
	I1212 00:19:42.414041 1166783 cri.go:129] container: {ID:4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 Status:running}
	I1212 00:19:42.414046 1166783 cri.go:135] skipping {4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 running}: state = "running", want "paused"
	I1212 00:19:42.414052 1166783 cri.go:129] container: {ID:5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e Status:running}
	I1212 00:19:42.414058 1166783 cri.go:131] skipping 5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e - not in ps
	I1212 00:19:42.414062 1166783 cri.go:129] container: {ID:74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672 Status:running}
	I1212 00:19:42.414068 1166783 cri.go:131] skipping 74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672 - not in ps
	I1212 00:19:42.414073 1166783 cri.go:129] container: {ID:8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 Status:running}
	I1212 00:19:42.414078 1166783 cri.go:135] skipping {8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 running}: state = "running", want "paused"
	I1212 00:19:42.414083 1166783 cri.go:129] container: {ID:91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b Status:running}
	I1212 00:19:42.414089 1166783 cri.go:131] skipping 91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b - not in ps
	I1212 00:19:42.414093 1166783 cri.go:129] container: {ID:929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af Status:running}
	I1212 00:19:42.414099 1166783 cri.go:131] skipping 929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af - not in ps
	I1212 00:19:42.414103 1166783 cri.go:129] container: {ID:9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5 Status:running}
	I1212 00:19:42.414111 1166783 cri.go:131] skipping 9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5 - not in ps
	I1212 00:19:42.414116 1166783 cri.go:129] container: {ID:9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d Status:running}
	I1212 00:19:42.414121 1166783 cri.go:135] skipping {9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d running}: state = "running", want "paused"
	I1212 00:19:42.414126 1166783 cri.go:129] container: {ID:a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa Status:running}
	I1212 00:19:42.414134 1166783 cri.go:131] skipping a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa - not in ps
	I1212 00:19:42.414138 1166783 cri.go:129] container: {ID:a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a Status:running}
	I1212 00:19:42.414144 1166783 cri.go:131] skipping a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a - not in ps
	I1212 00:19:42.414148 1166783 cri.go:129] container: {ID:b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e Status:running}
	I1212 00:19:42.414154 1166783 cri.go:135] skipping {b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e running}: state = "running", want "paused"
	I1212 00:19:42.414159 1166783 cri.go:129] container: {ID:e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749 Status:running}
	I1212 00:19:42.414165 1166783 cri.go:131] skipping e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749 - not in ps
	I1212 00:19:42.414169 1166783 cri.go:129] container: {ID:f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 Status:running}
	I1212 00:19:42.414175 1166783 cri.go:135] skipping {f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 running}: state = "running", want "paused"
	I1212 00:19:42.414180 1166783 cri.go:129] container: {ID:fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f Status:running}
	I1212 00:19:42.414185 1166783 cri.go:135] skipping {fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f running}: state = "running", want "paused"
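The listing above comes from `sudo runc --root /run/containerd/runc/k8s.io list -f json`; cri.go then keeps only IDs that appeared in the earlier `crictl ps` output and whose state matches the requested one (here "paused"), which is why every running container and every sandbox is skipped. A hedged Go sketch of that filtering step, assuming the JSON shape shown above; it uses the sandbox annotation as a stand-in for the real "not in crictl ps" check, and the input file name is illustrative.
	// runc_filter.go - illustrative filter over `runc list -f json` output.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	type runcContainer struct {
		ID          string            `json:"id"`
		Status      string            `json:"status"`
		Annotations map[string]string `json:"annotations"`
	}
	
	// filterByState returns the IDs of containers whose status matches want,
	// e.g. "paused" or "running", skipping pod sandbox entries.
	func filterByState(list []runcContainer, want string) []string {
		var ids []string
		for _, c := range list {
			if c.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
				continue // stand-in for the "not in ps" skips in the log above
			}
			if c.Status != want {
				continue // e.g. state = "running", want "paused"
			}
			ids = append(ids, c.ID)
		}
		return ids
	}
	
	func main() {
		data, err := os.ReadFile("runc-list.json") // captured `runc list -f json` output
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list []runcContainer
		if err := json.Unmarshal(data, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(filterByState(list, "paused"))
	}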
	I1212 00:19:42.414238 1166783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:19:42.426395 1166783 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 00:19:42.426407 1166783 kubeadm.go:636] restartCluster start
	I1212 00:19:42.426463 1166783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:19:42.437760 1166783 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:42.438356 1166783 kubeconfig.go:92] found "functional-204186" server: "https://192.168.49.2:8441"
	I1212 00:19:42.440196 1166783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:19:42.451884 1166783 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-12-12 00:18:34.640409327 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-12-12 00:19:41.687950639 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
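The restart path decides whether a kubeadm reconfigure is needed by diffing the kubeadm.yaml already on the node against the freshly generated kubeadm.yaml.new; the only drift here is the enable-admission-plugins override. A minimal Go sketch of that decision, relying on diff's exit status (0 means identical, 1 means the files differ); the function name needsReconfigure is an assumption for the example.
	// kubeadm_drift.go - illustrative check for config drift between two files.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// needsReconfigure runs `diff -u old new`; exit status 1 means the files
	// differ, 0 means they match, anything else is treated as an error.
	func needsReconfigure(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, string(out), err
	}
	
	func main() {
		drift, diff, err := needsReconfigure(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if drift {
			fmt.Println("needs reconfigure: configs differ:")
			fmt.Print(diff)
		}
	}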
	I1212 00:19:42.451896 1166783 kubeadm.go:1135] stopping kube-system containers ...
	I1212 00:19:42.451907 1166783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:19:42.451963 1166783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:19:42.496926 1166783 cri.go:89] found id: "4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783"
	I1212 00:19:42.496941 1166783 cri.go:89] found id: "9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d"
	I1212 00:19:42.496946 1166783 cri.go:89] found id: "4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743"
	I1212 00:19:42.496949 1166783 cri.go:89] found id: "b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e"
	I1212 00:19:42.496952 1166783 cri.go:89] found id: "7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	I1212 00:19:42.496956 1166783 cri.go:89] found id: "f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8"
	I1212 00:19:42.496962 1166783 cri.go:89] found id: "360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b"
	I1212 00:19:42.496966 1166783 cri.go:89] found id: "fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f"
	I1212 00:19:42.496969 1166783 cri.go:89] found id: "8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	I1212 00:19:42.496979 1166783 cri.go:89] found id: ""
	I1212 00:19:42.496984 1166783 cri.go:234] Stopping containers: [4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1]
	I1212 00:19:42.497038 1166783 ssh_runner.go:195] Run: which crictl
	I1212 00:19:42.501723 1166783 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1
	I1212 00:19:47.777226 1166783 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1: (5.275464165s)
	W1212 00:19:47.777279 1166783 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1: Process exited with status 1
	stdout:
	4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783
	9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d
	4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743
	b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e
	
	stderr:
	E1212 00:19:47.774088    3356 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": not found" containerID="7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	time="2023-12-12T00:19:47Z" level=fatal msg="stopping the container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": not found"
	I1212 00:19:47.777340 1166783 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:19:47.851952 1166783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:19:47.862994 1166783 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Dec 12 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 12 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Dec 12 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 12 00:18 /etc/kubernetes/scheduler.conf
	
	I1212 00:19:47.863057 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:19:47.874186 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:19:47.886147 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:19:47.897962 1166783 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:47.898020 1166783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:19:47.908984 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:19:47.920492 1166783 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:47.920556 1166783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
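	The four greps above check whether each kubeconfig already references the expected control-plane endpoint; a non-zero grep exit means the file points elsewhere, so it is deleted and left for `kubeadm init phase kubeconfig` to regenerate. A hedged Go sketch of the same check (the real code shells out to grep as shown):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Println("skip:", err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale or different endpoint: remove so kubeadm regenerates the file.
			fmt.Println("removing", f)
			_ = os.Remove(f)
		}
	}
}
```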
	I1212 00:19:47.931644 1166783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:19:47.942812 1166783 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 00:19:47.942838 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:48.021789 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.323474 1166783 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.301659419s)
	I1212 00:19:50.323493 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.533360 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.615860 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.703568 1166783 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:19:50.703633 1166783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:19:50.729933 1166783 api_server.go:72] duration metric: took 26.36494ms to wait for apiserver process to appear ...
	I1212 00:19:50.729948 1166783 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:19:50.729964 1166783 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:19:50.741750 1166783 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1212 00:19:50.757498 1166783 api_server.go:141] control plane version: v1.28.4
	I1212 00:19:50.757517 1166783 api_server.go:131] duration metric: took 27.563594ms to wait for apiserver health ...
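	After the control-plane phases run, minikube polls the apiserver's /healthz endpoint and proceeds once it returns 200 "ok", as logged above. A minimal probe in Go; certificate verification is skipped purely for the sketch (the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
```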
	I1212 00:19:50.757525 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:50.757531 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:50.760139 1166783 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:19:50.762051 1166783 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:19:50.769174 1166783 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:19:50.769199 1166783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:19:50.799044 1166783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:19:51.250997 1166783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:19:51.259526 1166783 system_pods.go:59] 8 kube-system pods found
	I1212 00:19:51.259544 1166783 system_pods.go:61] "coredns-5dd5756b68-75jb5" [88486eba-5928-4a3b-b0e2-82572161ba5b] Running
	I1212 00:19:51.259548 1166783 system_pods.go:61] "etcd-functional-204186" [22eaa66d-9573-4688-a676-a624f562a069] Running
	I1212 00:19:51.259552 1166783 system_pods.go:61] "kindnet-p7qfc" [80d814ed-cb37-4243-97a7-61169cbf7ae7] Running
	I1212 00:19:51.259556 1166783 system_pods.go:61] "kube-apiserver-functional-204186" [69dc4cd3-92c5-4f67-813d-c38849073058] Running
	I1212 00:19:51.259561 1166783 system_pods.go:61] "kube-controller-manager-functional-204186" [d4482b26-8308-4a6f-8efe-dd15c7689236] Running
	I1212 00:19:51.259568 1166783 system_pods.go:61] "kube-proxy-xn2hr" [17a4a16d-a0cd-45c8-bd8c-da9736f87535] Running
	I1212 00:19:51.259572 1166783 system_pods.go:61] "kube-scheduler-functional-204186" [9b16cc61-09fb-4f6c-af03-029249e6bf3d] Running
	I1212 00:19:51.259579 1166783 system_pods.go:61] "storage-provisioner" [f4424a2e-f114-46c8-9059-3ddd8cab9386] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:19:51.259592 1166783 system_pods.go:74] duration metric: took 8.577845ms to wait for pod list to return data ...
	I1212 00:19:51.259600 1166783 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:19:51.262989 1166783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:19:51.263007 1166783 node_conditions.go:123] node cpu capacity is 2
	I1212 00:19:51.263019 1166783 node_conditions.go:105] duration metric: took 3.412063ms to run NodePressure ...
	I1212 00:19:51.263034 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:51.488154 1166783 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 00:19:51.492965 1166783 retry.go:31] will retry after 311.086978ms: kubelet not initialised
	I1212 00:19:51.856393 1166783 retry.go:31] will retry after 290.962584ms: kubelet not initialised
	I1212 00:19:52.153722 1166783 kubeadm.go:787] kubelet initialised
	I1212 00:19:52.153732 1166783 kubeadm.go:788] duration metric: took 665.564362ms waiting for restarted kubelet to initialise ...
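	The two "will retry after ..." lines come from a bounded retry loop: the condition ("kubelet initialised") is re-checked after a short, growing delay until it succeeds or a deadline passes. A generic sketch of that pattern (not the retry package minikube actually uses):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil re-runs check with a growing delay until it succeeds or the deadline expires.
func retryUntil(deadline time.Duration, check func() error) error {
	delay := 300 * time.Millisecond
	start := time.Now()
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // back off gradually between attempts
	}
}

func main() {
	attempts := 0
	_ = retryUntil(2*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("kubelet not initialised")
		}
		return nil
	})
}
```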
	I1212 00:19:52.153739 1166783 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:19:52.164434 1166783 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.194792 1166783 pod_ready.go:97] error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.194808 1166783 pod_ready.go:81] duration metric: took 1.030359999s waiting for pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.194819 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.194842 1166783 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195194 1166783 pod_ready.go:97] error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195209 1166783 pod_ready.go:81] duration metric: took 359.62µs waiting for pod "etcd-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195218 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195237 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195488 1166783 pod_ready.go:97] error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195496 1166783 pod_ready.go:81] duration metric: took 253.176µs waiting for pod "kube-apiserver-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195504 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195523 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195772 1166783 pod_ready.go:97] error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195781 1166783 pod_ready.go:81] duration metric: took 252.076µs waiting for pod "kube-controller-manager-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195789 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195811 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xn2hr" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.196092 1166783 pod_ready.go:97] error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196100 1166783 pod_ready.go:81] duration metric: took 283.485µs waiting for pod "kube-proxy-xn2hr" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.196108 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196128 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.196388 1166783 pod_ready.go:97] error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196397 1166783 pod_ready.go:81] duration metric: took 241.82µs waiting for pod "kube-scheduler-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.196405 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196422 1166783 pod_ready.go:38] duration metric: took 1.042674859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
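	Each of the waits above fetches the pod from the apiserver and inspects its Ready condition; here every GET fails with "connection refused" because nothing is answering on 192.168.49.2:8441, so each wait is skipped almost immediately. A hedged client-go sketch of the readiness check (kubeconfig path and pod name are illustrative):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-75jb5")
	fmt.Println(ready, err)
}
```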
	I1212 00:19:53.196436 1166783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W1212 00:19:53.205857 1166783 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
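	The oom_adj adjustment fails because `pgrep kube-apiserver` prints nothing (the apiserver has just exited), so the shell expands the path to /proc//oom_adj. A small sketch that resolves the PID first and only reads the file when one was found (illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	pid := strings.TrimSpace(string(out))
	if err != nil || pid == "" {
		fmt.Println("kube-apiserver is not running; skipping oom_adj check")
		return
	}
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, data)
}
```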
	I1212 00:19:53.205869 1166783 kubeadm.go:640] restartCluster took 10.779458014s
	I1212 00:19:53.205876 1166783 kubeadm.go:406] StartCluster complete in 10.883351408s
	I1212 00:19:53.205889 1166783 settings.go:142] acquiring lock: {Name:mk888158b3cbabbb2583b6a6f74ff62a9621d5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:53.205956 1166783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:19:53.206588 1166783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/kubeconfig: {Name:mkea8ea25a391ae5db2568a02e638c76b0d6995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:53.206816 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:19:53.207101 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:53.207263 1166783 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:19:53.207360 1166783 addons.go:69] Setting storage-provisioner=true in profile "functional-204186"
	I1212 00:19:53.207373 1166783 addons.go:231] Setting addon storage-provisioner=true in "functional-204186"
	W1212 00:19:53.207379 1166783 addons.go:240] addon storage-provisioner should already be in state true
	I1212 00:19:53.207445 1166783 host.go:66] Checking if "functional-204186" exists ...
	I1212 00:19:53.207821 1166783 addons.go:69] Setting default-storageclass=true in profile "functional-204186"
	I1212 00:19:53.207836 1166783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-204186"
	I1212 00:19:53.207860 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:53.208104 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	W1212 00:19:53.208431 1166783 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-204186" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.208443 1166783 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.208513 1166783 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:19:53.213129 1166783 out.go:177] * Verifying Kubernetes components...
	I1212 00:19:53.218222 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:19:53.249929 1166783 addons.go:231] Setting addon default-storageclass=true in "functional-204186"
	W1212 00:19:53.249941 1166783 addons.go:240] addon default-storageclass should already be in state true
	I1212 00:19:53.249963 1166783 host.go:66] Checking if "functional-204186" exists ...
	I1212 00:19:53.250432 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:53.268837 1166783 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:19:53.270964 1166783 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:19:53.270978 1166783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:19:53.271046 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:53.291691 1166783 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:19:53.291703 1166783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:19:53.291762 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:53.316806 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	E1212 00:19:53.334327 1166783 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1212 00:19:53.334349 1166783 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1212 00:19:53.334363 1166783 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1212 00:19:53.334502 1166783 node_ready.go:35] waiting up to 6m0s for node "functional-204186" to be "Ready" ...
	I1212 00:19:53.334838 1166783 node_ready.go:53] error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.334851 1166783 node_ready.go:38] duration metric: took 337.762µs waiting for node "functional-204186" to be "Ready" ...
	I1212 00:19:53.338808 1166783 out.go:177] 
	W1212 00:19:53.341024 1166783 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:53.341047 1166783 out.go:239] * 
	W1212 00:19:53.342114 1166783 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:19:53.345121 1166783 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0abe98be43702       97e04611ad434       2 seconds ago        Running             coredns                   1                   e7ee9926b7666       coredns-5dd5756b68-75jb5
	35d39f988885b       ba04bb24b9575       2 seconds ago        Running             storage-provisioner       2                   91c1e8e748144       storage-provisioner
	0be1a7ad1cf29       3ca3ca488cf13       2 seconds ago        Running             kube-proxy                1                   a33d05ee8738d       kube-proxy-xn2hr
	11a13b0a859b6       04b4eaa3d3db8       2 seconds ago        Running             kindnet-cni               1                   5695af01bb75b       kindnet-p7qfc
	ec98dd60a37e1       04b4c447bb9d4       2 seconds ago        Exited              kube-apiserver            1                   f426cbf93d3b8       kube-apiserver-functional-204186
	4b48b0124d6a9       ba04bb24b9575       19 seconds ago       Exited              storage-provisioner       1                   91c1e8e748144       storage-provisioner
	9be15f3092c17       97e04611ad434       35 seconds ago       Exited              coredns                   0                   e7ee9926b7666       coredns-5dd5756b68-75jb5
	4c3518b0312ab       04b4eaa3d3db8       49 seconds ago       Exited              kindnet-cni               0                   5695af01bb75b       kindnet-p7qfc
	b10a99a14fe0a       3ca3ca488cf13       50 seconds ago       Exited              kube-proxy                0                   a33d05ee8738d       kube-proxy-xn2hr
	f1bf1c4332d38       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   a67f2d7f88b70       etcd-functional-204186
	360b7493b53a0       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   74d87972b5980       kube-controller-manager-functional-204186
	fdcc7d847a538       05c284c929889       About a minute ago   Running             kube-scheduler            0                   9832a28d5e6be       kube-scheduler-functional-204186
	
	* 
	* ==> containerd <==
	* Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.357740880Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.378507754Z" level=info msg="StartContainer for \"0abe98be437028e07d1455f24fbb28b5834240a4e81135bf4c59ed7090a55ce6\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.394345783Z" level=info msg="StartContainer for \"11a13b0a859b60db60e77078ff0a9c0fd3bf66498e48a7c2d9e4bb2e192725e2\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.415127565Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3873 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.418758457Z" level=info msg="StartContainer for \"0be1a7ad1cf29b232d9b2f17256ac8b257d11a9c4160fe02383d37cae2b804bc\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.899780450Z" level=info msg="StopContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" with timeout 2 (s)"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.900933615Z" level=info msg="Stop container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" with signal terminated"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957937189Z" level=info msg="shim disconnected" id=929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957985557Z" level=warning msg="cleaning up after shim disconnected" id=929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af namespace=k8s.io
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957995765Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.979783726Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4061 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.984347576Z" level=info msg="RemoveContainer for \"5b2014e0c953df23c937e404c551a12c7f253506c7e09be700cf94c74ebf812f\""
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.989771654Z" level=info msg="RemoveContainer for \"5b2014e0c953df23c937e404c551a12c7f253506c7e09be700cf94c74ebf812f\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012811357Z" level=info msg="shim disconnected" id=8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012885572Z" level=warning msg="cleaning up after shim disconnected" id=8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 namespace=k8s.io
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012899808Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.024366134Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4086 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.027684749Z" level=info msg="StopContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028454754Z" level=info msg="StopPodSandbox for \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\""
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028552870Z" level=info msg="Container to stop \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028768507Z" level=info msg="TearDown network for sandbox \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\" successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028788060Z" level=info msg="StopPodSandbox for \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.994452895Z" level=info msg="RemoveContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\""
	Dec 12 00:19:54 functional-204186 containerd[3161]: time="2023-12-12T00:19:54.006865232Z" level=info msg="RemoveContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" returns successfully"
	Dec 12 00:19:54 functional-204186 containerd[3161]: time="2023-12-12T00:19:54.010093116Z" level=error msg="ContainerStatus for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": not found"
	
	* 
	* ==> coredns [0abe98be437028e07d1455f24fbb28b5834240a4e81135bf4c59ed7090a55ce6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47675 - 44376 "HINFO IN 8293000808179795944.5557927990981517027. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012825832s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43954 - 51804 "HINFO IN 7258703346742299720.7667376483180762445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013624227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001103] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001114] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +0.004970] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=000000001bb038f1
	[  +0.001195] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000760] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000ce236adb
	[  +0.001178] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +3.628923] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001149] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000c19aa351
	[  +0.001199] FS-Cache: O-key=[8] '4f3e5c0100000000'
	[  +0.000795] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001065] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001178] FS-Cache: N-key=[8] '4f3e5c0100000000'
	[  +0.413575] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000d9ff942f
	[  +0.001137] FS-Cache: O-key=[8] '553e5c0100000000'
	[  +0.000730] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001098] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=000000008357462d
	[  +0.001241] FS-Cache: N-key=[8] '553e5c0100000000'
	
	* 
	* ==> etcd [f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8] <==
	* {"level":"info","ts":"2023-12-12T00:18:43.480306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-12T00:18:43.487446Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-12T00:18:43.489297Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T00:18:43.489605Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:18:43.495477Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:18:43.496288Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T00:18:43.496421Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:18:44.017135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.019999Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-204186 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:18:44.020176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:18:44.025361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-12T00:18:44.025687Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.03183Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:18:44.033017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:18:44.043361Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:18:44.075781Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:19:54 up  7:02,  0 users,  load average: 1.58, 1.26, 0.74
	Linux functional-204186 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [11a13b0a859b60db60e77078ff0a9c0fd3bf66498e48a7c2d9e4bb2e192725e2] <==
	* I1212 00:19:52.432050       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:19:52.432116       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:19:52.432252       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:19:52.432267       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:19:52.432281       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:19:52.821571       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:52.821602       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743] <==
	* I1212 00:19:04.921582       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:19:04.921651       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:19:04.921815       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:19:04.921830       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:19:04.921862       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:19:05.417437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:05.417644       1 main.go:227] handling current node
	I1212 00:19:15.430336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:15.430366       1 main.go:227] handling current node
	I1212 00:19:25.443764       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:25.443792       1 main.go:227] handling current node
	I1212 00:19:35.447984       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:35.448411       1 main.go:227] handling current node
	I1212 00:19:45.457404       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:45.457445       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2] <==
	* I1212 00:19:52.287856       1 options.go:220] external host was not specified, using 192.168.49.2
	I1212 00:19:52.289191       1 server.go:148] Version: v1.28.4
	I1212 00:19:52.289333       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1212 00:19:52.295465       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
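	This is the proximate cause of the failed restart: the new kube-apiserver could not bind 0.0.0.0:8441, consistent with the earlier warning that stopping the kube-system containers did not complete cleanly and that port conflicts might arise. A quick way to reproduce the symptom from Go, assuming nothing about which process actually owns the port:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// If another process already listens on 8441, Listen returns
	// "bind: address already in use", matching the apiserver error above.
	ln, err := net.Listen("tcp", "0.0.0.0:8441")
	if err != nil {
		fmt.Println("port 8441 is busy:", err)
		return
	}
	defer ln.Close()
	fmt.Println("port 8441 is free")
}
```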
	
	* 
	* ==> kube-controller-manager [360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b] <==
	* I1212 00:19:01.913164       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:19:02.207880       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xn2hr"
	I1212 00:19:02.236055       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p7qfc"
	I1212 00:19:02.340642       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:19:02.370173       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:19:02.370208       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:19:02.402848       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 00:19:02.492956       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 00:19:02.782766       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zssw5"
	I1212 00:19:02.793880       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-75jb5"
	I1212 00:19:02.839458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="437.553609ms"
	I1212 00:19:02.859056       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-zssw5"
	I1212 00:19:02.870835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.305148ms"
	I1212 00:19:02.890752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.754212ms"
	I1212 00:19:02.911932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.123482ms"
	I1212 00:19:02.912115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.549µs"
	I1212 00:19:04.058663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.954µs"
	I1212 00:19:04.066980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.04µs"
	I1212 00:19:04.074159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.197µs"
	I1212 00:19:19.096666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.778µs"
	I1212 00:19:20.108699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.224824ms"
	I1212 00:19:20.110382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.409µs"
	I1212 00:19:20.110809       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1212 00:19:51.900629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.264291ms"
	I1212 00:19:51.900757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.154µs"
	
	* 
	* ==> kube-proxy [0be1a7ad1cf29b232d9b2f17256ac8b257d11a9c4160fe02383d37cae2b804bc] <==
	* I1212 00:19:52.505384       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:19:52.507881       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:19:52.507920       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:19:52.507929       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:19:52.508139       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:19:52.508448       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:19:52.508466       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:19:52.509572       1 config.go:188] "Starting service config controller"
	I1212 00:19:52.509727       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:19:52.509823       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:19:52.509839       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:19:52.510562       1 config.go:315] "Starting node config controller"
	I1212 00:19:52.510576       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:19:52.610589       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:19:52.610596       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:19:52.610644       1 shared_informer.go:318] Caches are synced for node config
	W1212 00:19:52.978692       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:52.978734       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:52.978764       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:53.789245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.789317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:54.120973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:54.121036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:54.515895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:54.515945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
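	The mangled query strings above ("%!s(MISSING)", "%!D(MISSING)", ...) are not corruption in this report: the URL-encoded selector (containing "%21", "%2F", "%2C", "%3D") evidently passed through a printf-style formatter as a format string somewhere along the logging path, and Go's fmt package renders a verb with no matching argument as "%!verb(MISSING)". A short reproduction of that behaviour:

```go
package main

import "fmt"

func main() {
	// "%3D" is parsed as width 3 plus verb 'D' with no argument supplied,
	// so fmt prints "%!D(MISSING)" in its place.
	s := fmt.Sprintf("fieldSelector=metadata.name%3Dfunctional-204186")
	fmt.Println(s) // fieldSelector=metadata.name%!D(MISSING)functional-204186
}
```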
	
	* 
	* ==> kube-proxy [b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e] <==
	* I1212 00:19:04.559125       1 server_others.go:69] "Using iptables proxy"
	I1212 00:19:04.576426       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:19:04.599481       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:19:04.601715       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:19:04.601899       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:19:04.601984       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:19:04.602161       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:19:04.602462       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:19:04.602795       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:19:04.604008       1 config.go:188] "Starting service config controller"
	I1212 00:19:04.604321       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:19:04.604499       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:19:04.604575       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:19:04.606449       1 config.go:315] "Starting node config controller"
	I1212 00:19:04.606600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:19:04.705005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:19:04.705039       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:19:04.706747       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f] <==
	* W1212 00:18:46.916161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:18:46.916178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 00:18:46.916225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:18:46.916240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:18:46.916287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:18:46.916302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:18:46.916358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:46.916373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:46.916558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:18:46.916578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:18:47.752991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:47.753339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:47.752996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:18:47.753590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:18:47.793668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:18:47.793878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:18:47.884125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:18:47.884158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 00:18:47.908838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:18:47.909070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:18:47.984156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:47.984297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:48.049219       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:18:48.049625       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 00:18:50.203649       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:53.995901    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:53.996181    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:53.996405    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:53.996632    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:53.997036    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:53.997128    3542 scope.go:117] "RemoveContainer" containerID="ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: E1212 00:19:53.998020    3542 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-204186_kube-system(102fdf4414c3e8f4b2b76c9e617d21ca)\"" pod="kube-system/kube-apiserver-functional-204186" podUID="102fdf4414c3e8f4b2b76c9e617d21ca"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.009610    3542 scope.go:117] "RemoveContainer" containerID="8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: E1212 00:19:54.010428    3542 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": not found" containerID="8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.010520    3542 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"} err="failed to get container status \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": not found"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.151566    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.152082    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.152486    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.152879    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.153267    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.153670    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.893882    3542 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0bb521992826aaef3b829c57d52661ef" path="/var/lib/kubelet/pods/0bb521992826aaef3b829c57d52661ef/volumes"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:54.999969    3542 scope.go:117] "RemoveContainer" containerID="ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: E1212 00:19:55.000582    3542 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-204186_kube-system(102fdf4414c3e8f4b2b76c9e617d21ca)\"" pod="kube-system/kube-apiserver-functional-204186" podUID="102fdf4414c3e8f4b2b76c9e617d21ca"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.000753    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001101    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001318    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001515    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001680    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001855    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	* 
	* ==> storage-provisioner [35d39f988885b4b77d9b8fd4e6fd28e8cd51d3db66cdc048ce6c8b9a7ab9d5d3] <==
	* I1212 00:19:52.288774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:19:52.305023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:19:52.305089       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783] <==
	* I1212 00:19:35.251844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:19:35.267237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:19:35.267809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:19:35.292941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:19:35.293350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59!
	I1212 00:19:35.293575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dce0da4-55da-4699-8143-5da0ecbb7ad6", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59 became leader
	I1212 00:19:35.394357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:19:54.751268 1168187 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
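Every failure above collapses to one symptom: nothing answers on the apiserver endpoint (localhost:8441 inside the node, 192.168.49.2:8441 from outside) while kube-apiserver sits in CrashLoopBackOff. A minimal Go sketch of that probe, useful when triaging this class of failure by hand; the address is taken from the logs above, the timeout is illustrative and not part of the harness:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint the kubelet and kubectl are dialing in the logs above.
	addr := "192.168.49.2:8441"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// "connection refused" here matches the failure mode in this report.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver port %s accepts TCP connections\n", addr)
}

A connection-refused result matches the kubelet and kubectl messages above; a timeout instead would point at networking rather than a crashed apiserver.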
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-204186 -n functional-204186
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-204186 -n functional-204186: exit status 2 (381.062446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-204186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (17.77s)
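The harness decides the apiserver is down from the Go-template output of minikube status --format={{.APIServer}} shown above ("Stopped", exit status 2). A rough standalone equivalent of that check, written as a sketch rather than the test helpers' own code, and assuming the same binary path and profile name used in this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the invocation logged above; assumes the working directory
	// holds the out/ tree from this CI run.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "functional-204186", "-n", "functional-204186")
	out, err := cmd.CombinedOutput() // a non-Running component yields a non-zero exit
	state := strings.TrimSpace(string(out))
	fmt.Printf("apiserver state: %q (exit err: %v)\n", state, err)
	if state != "Running" {
		fmt.Println("apiserver is not healthy; kubectl-based assertions will fail")
	}
}

Anything other than Running (here: Stopped) is why the subsequent kubectl-based checks are skipped.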

                                                
                                    
TestFunctional/serial/ComponentHealth (2.41s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-204186 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-204186 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (65.646657ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-204186 get po -l tier=control-plane -n kube-system -o=json": exit status 1
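ComponentHealth never gets past its first step: the control-plane pod listing (tier=control-plane in kube-system) is refused because the apiserver is down, so the items list comes back empty and the command exits non-zero. The same query can be replayed by hand; the sketch below shells out to kubectl exactly as shown above and decodes only the fields a basic health read needs (the trimmed podList type is illustrative, not minikube's own):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList keeps only the fields used here (a hypothetical trimmed type).
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-204186",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl failed (apiserver down?): %v", err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	if len(pods.Items) == 0 {
		log.Fatal("no control-plane pods returned")
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Metadata.Name, p.Status.Phase)
	}
}

In this run the apiserver refuses the connection, so the sketch fails at the kubectl step with the same error the test reports.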
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-204186
helpers_test.go:235: (dbg) docker inspect functional-204186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31",
	        "Created": "2023-12-12T00:18:25.221497989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1163078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:18:25.55807554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/hosts",
	        "LogPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31-json.log",
	        "Name": "/functional-204186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-204186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-204186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281-init/diff:/var/lib/docker/overlay2/83f94b9f515065f4cf4d4337d1fbe3fc13b585131a89a52ad8eb2b6bf7d119ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-204186",
	                "Source": "/var/lib/docker/volumes/functional-204186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-204186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-204186",
	                "name.minikube.sigs.k8s.io": "functional-204186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f06969edef670514b05008e5de9ef1c1b17b7cfbdaf03c893731542632a1c35",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34042"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34039"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34041"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34040"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1f06969edef6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-204186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7cfe39aaf2d8",
	                        "functional-204186"
	                    ],
	                    "NetworkID": "6ba4ac6be618f8f1444cda50bb12d14c77e16c004975f4866f6cf01acb655fe8",
	                    "EndpointID": "4e500a345b2e632c078524e976c542a16025a1d15c3a51f19fb2c9cb3755c9b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
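The inspect output above also records how the node container's ports are published: 22, 2376, 5000, 8441 and 32443 are each bound to an ephemeral port on 127.0.0.1 (the apiserver port 8441 maps to 34040 in this run). Any of these mappings can be read back with docker's Go-template format, the same pattern the provisioning step uses further down for 22/tcp; a small sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape as the 22/tcp lookup in the provisioning logs below,
	// pointed at the apiserver port instead.
	format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "functional-204186").Output()
	if err != nil {
		log.Fatal(err)
	}
	// With the port bindings shown above this prints 34040.
	fmt.Println(strings.TrimSpace(string(out)))
}

This is the same mechanism the machine-provisioning step uses below to discover the SSH port for the container.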
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-204186 -n functional-204186
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-204186 -n functional-204186: exit status 2 (343.912866ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 logs -n 25: (1.592055918s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-111319 --log_dir                                                  | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	|         | /tmp/nospam-111319 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-111319                                                         | nospam-111319     | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:18 UTC |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:18 UTC | 12 Dec 23 00:19 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-204186 cache add                                              | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | minikube-local-cache-test:functional-204186                              |                   |         |         |                     |                     |
	| cache   | functional-204186 cache delete                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | minikube-local-cache-test:functional-204186                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	| ssh     | functional-204186 ssh sudo                                               | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-204186                                                        | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh                                                    | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-204186 cache reload                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	| ssh     | functional-204186 ssh                                                    | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-204186 kubectl --                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | --context functional-204186                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:19:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:19:38.104221 1166783 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:19:38.104406 1166783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:19:38.104410 1166783 out.go:309] Setting ErrFile to fd 2...
	I1212 00:19:38.104415 1166783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:19:38.104683 1166783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:19:38.105092 1166783 out.go:303] Setting JSON to false
	I1212 00:19:38.106053 1166783 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25325,"bootTime":1702315053,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:19:38.106118 1166783 start.go:138] virtualization:  
	I1212 00:19:38.108824 1166783 out.go:177] * [functional-204186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:19:38.111872 1166783 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:19:38.114135 1166783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:19:38.112023 1166783 notify.go:220] Checking for updates...
	I1212 00:19:38.117202 1166783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:19:38.119664 1166783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:19:38.122229 1166783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:19:38.124644 1166783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:19:38.127615 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:38.127742 1166783 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:19:38.155131 1166783 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:19:38.155239 1166783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:19:38.234235 1166783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:19:38.224036211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:19:38.234326 1166783 docker.go:295] overlay module found
	I1212 00:19:38.236723 1166783 out.go:177] * Using the docker driver based on existing profile
	I1212 00:19:38.239483 1166783 start.go:298] selected driver: docker
	I1212 00:19:38.239491 1166783 start.go:902] validating driver "docker" against &{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:38.239572 1166783 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:19:38.239692 1166783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:19:38.328217 1166783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:19:38.318818701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:19:38.328604 1166783 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:19:38.328648 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:38.328655 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:38.328667 1166783 start_flags.go:323] config:
	{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:38.331267 1166783 out.go:177] * Starting control plane node functional-204186 in cluster functional-204186
	I1212 00:19:38.333309 1166783 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:19:38.335076 1166783 out.go:177] * Pulling base image ...
	I1212 00:19:38.336829 1166783 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:19:38.336887 1166783 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1212 00:19:38.336894 1166783 cache.go:56] Caching tarball of preloaded images
	I1212 00:19:38.336924 1166783 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:19:38.336994 1166783 preload.go:174] Found /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:19:38.337003 1166783 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1212 00:19:38.337114 1166783 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/config.json ...
	I1212 00:19:38.354767 1166783 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:19:38.354782 1166783 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:19:38.354805 1166783 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:19:38.354850 1166783 start.go:365] acquiring machines lock for functional-204186: {Name:mk52ac4d0a7302cc0a39b0bd3e6a9baa9621f9b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:19:38.354922 1166783 start.go:369] acquired machines lock for "functional-204186" in 52.545µs
	I1212 00:19:38.354940 1166783 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:19:38.354946 1166783 fix.go:54] fixHost starting: 
	I1212 00:19:38.355276 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:38.374303 1166783 fix.go:102] recreateIfNeeded on functional-204186: state=Running err=<nil>
	W1212 00:19:38.374326 1166783 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:19:38.377049 1166783 out.go:177] * Updating the running docker "functional-204186" container ...
	I1212 00:19:38.379542 1166783 machine.go:88] provisioning docker machine ...
	I1212 00:19:38.379560 1166783 ubuntu.go:169] provisioning hostname "functional-204186"
	I1212 00:19:38.379653 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:38.400809 1166783 main.go:141] libmachine: Using SSH client type: native
	I1212 00:19:38.401311 1166783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34043 <nil> <nil>}
	I1212 00:19:38.401325 1166783 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-204186 && echo "functional-204186" | sudo tee /etc/hostname
	I1212 00:19:38.558711 1166783 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-204186
	
	I1212 00:19:38.558784 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:38.582856 1166783 main.go:141] libmachine: Using SSH client type: native
	I1212 00:19:38.583296 1166783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34043 <nil> <nil>}
	I1212 00:19:38.583341 1166783 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-204186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-204186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-204186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:19:38.724752 1166783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:19:38.724773 1166783 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1135857/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1135857/.minikube}
	I1212 00:19:38.724797 1166783 ubuntu.go:177] setting up certificates
	I1212 00:19:38.724805 1166783 provision.go:83] configureAuth start
	I1212 00:19:38.724870 1166783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-204186
	I1212 00:19:38.743972 1166783 provision.go:138] copyHostCerts
	I1212 00:19:38.744040 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem, removing ...
	I1212 00:19:38.744067 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem
	I1212 00:19:38.744143 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem (1078 bytes)
	I1212 00:19:38.744245 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem, removing ...
	I1212 00:19:38.744249 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem
	I1212 00:19:38.744273 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem (1123 bytes)
	I1212 00:19:38.744330 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem, removing ...
	I1212 00:19:38.744335 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem
	I1212 00:19:38.744358 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem (1675 bytes)
	I1212 00:19:38.744406 1166783 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem org=jenkins.functional-204186 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-204186]
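
The provision.go step above generates a server certificate signed by minikube's CA and carrying the listed SANs (node IP, loopback, hostname, the profile name). As a rough illustration of what such a step involves, here is a minimal, standalone Go sketch that builds a certificate with the same kinds of SANs; it is self-signed and uses only the standard library, so it is not minikube's actual implementation, and the subject and lifetime values merely echo the log above.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed server certificate carrying the same
	// kinds of SANs minikube lists above (node IP, loopback, hostname).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-204186"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration seen above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "functional-204186"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
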
	I1212 00:19:39.317206 1166783 provision.go:172] copyRemoteCerts
	I1212 00:19:39.317258 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:19:39.317326 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.337099 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.437908 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 00:19:39.468125 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:19:39.498465 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:19:39.527513 1166783 provision.go:86] duration metric: configureAuth took 802.695673ms
	I1212 00:19:39.527531 1166783 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:19:39.527738 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:39.527750 1166783 machine.go:91] provisioned docker machine in 1.148193061s
	I1212 00:19:39.527756 1166783 start.go:300] post-start starting for "functional-204186" (driver="docker")
	I1212 00:19:39.527765 1166783 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:19:39.527814 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:19:39.527849 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.546029 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.646100 1166783 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:19:39.650558 1166783 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:19:39.650583 1166783 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:19:39.650596 1166783 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:19:39.650602 1166783 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:19:39.650611 1166783 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/addons for local assets ...
	I1212 00:19:39.650666 1166783 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/files for local assets ...
	I1212 00:19:39.650748 1166783 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem -> 11412812.pem in /etc/ssl/certs
	I1212 00:19:39.650824 1166783 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts -> hosts in /etc/test/nested/copy/1141281
	I1212 00:19:39.650866 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1141281
	I1212 00:19:39.662106 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:19:39.691766 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts --> /etc/test/nested/copy/1141281/hosts (40 bytes)
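
The filesync.go lines above scan .minikube/files and copy each local asset to the identical path inside the node (e.g. files/etc/ssl/certs/11412812.pem becomes /etc/ssl/certs/11412812.pem). A minimal Go sketch of that scan-and-map step, using an assumed local root; this only prints the mapping and is illustrative, not minikube's code.

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	// Assumed local root, mirroring the directory scanned in the log above.
	root := os.ExpandEnv("$HOME/.minikube/files")
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		// Each file is copied to the identical path under / on the node,
		// e.g. etc/ssl/certs/11412812.pem -> /etc/ssl/certs/11412812.pem.
		fmt.Printf("local asset: %s -> /%s\n", path, filepath.ToSlash(rel))
		return nil
	})
	if err != nil {
		panic(err)
	}
}
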
	I1212 00:19:39.720033 1166783 start.go:303] post-start completed in 192.262029ms
	I1212 00:19:39.720120 1166783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:19:39.720158 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.738794 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.833772 1166783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:19:39.841224 1166783 fix.go:56] fixHost completed within 1.486270012s
	I1212 00:19:39.841239 1166783 start.go:83] releasing machines lock for "functional-204186", held for 1.486310422s
	I1212 00:19:39.841305 1166783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-204186
	I1212 00:19:39.862046 1166783 ssh_runner.go:195] Run: cat /version.json
	I1212 00:19:39.862101 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.862350 1166783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:19:39.862409 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.888120 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.889925 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.988243 1166783 ssh_runner.go:195] Run: systemctl --version
	I1212 00:19:40.123119 1166783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:19:40.130619 1166783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1212 00:19:40.156770 1166783 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:19:40.156843 1166783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:19:40.168259 1166783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:19:40.168274 1166783 start.go:475] detecting cgroup driver to use...
	I1212 00:19:40.168327 1166783 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:19:40.168376 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:19:40.185332 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:19:40.200998 1166783 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:19:40.201062 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:19:40.219091 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:19:40.236047 1166783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:19:40.377539 1166783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:19:40.507741 1166783 docker.go:219] disabling docker service ...
	I1212 00:19:40.507815 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:19:40.525312 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:19:40.541366 1166783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:19:40.671172 1166783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:19:40.800459 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:19:40.815340 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:19:40.836927 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:19:40.851130 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:19:40.864529 1166783 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:19:40.864600 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:19:40.880806 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:19:40.895794 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:19:40.909131 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:19:40.922165 1166783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:19:40.933419 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:19:40.946768 1166783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:19:40.957556 1166783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:19:40.968029 1166783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:19:41.083587 1166783 ssh_runner.go:195] Run: sudo systemctl restart containerd
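
The run of sed commands above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, the runc v2 runtime, conf_dir) rewrites /etc/containerd/config.toml in place before containerd is restarted. A minimal Go sketch of one of those in-place edits, assuming the same file path; it mirrors the SystemdCgroup sed expression from the log rather than minikube's own code.

package main

import (
	"os"
	"regexp"
)

func main() {
	// Illustrative only: force SystemdCgroup = false in containerd's config,
	// mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` step above.
	const path = "/etc/containerd/config.toml" // path as seen in the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
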
	I1212 00:19:41.323366 1166783 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:19:41.323436 1166783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:19:41.331217 1166783 start.go:543] Will wait 60s for crictl version
	I1212 00:19:41.331273 1166783 ssh_runner.go:195] Run: which crictl
	I1212 00:19:41.339039 1166783 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:19:41.383778 1166783 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1212 00:19:41.383846 1166783 ssh_runner.go:195] Run: containerd --version
	I1212 00:19:41.416098 1166783 ssh_runner.go:195] Run: containerd --version
	I1212 00:19:41.448227 1166783 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I1212 00:19:41.450208 1166783 cli_runner.go:164] Run: docker network inspect functional-204186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:19:41.467903 1166783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:19:41.474722 1166783 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:19:41.476837 1166783 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:19:41.476929 1166783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:19:41.521062 1166783 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:19:41.521076 1166783 containerd.go:518] Images already preloaded, skipping extraction
	I1212 00:19:41.521129 1166783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:19:41.563890 1166783 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:19:41.563902 1166783 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:19:41.563972 1166783 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:19:41.605056 1166783 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:19:41.605079 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:41.605088 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:41.605097 1166783 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:19:41.605114 1166783 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-204186 NodeName:functional-204186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:19:41.605243 1166783 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-204186"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
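
The kubeadm config above is rendered from the options struct logged at kubeadm.go:176. As an illustration of that kind of templating (not minikube's actual template), here is a minimal Go sketch that expands an apiServer stanza like the one above from a component-options map; field names and values are taken from the log only for illustration.

package main

import (
	"os"
	"text/template"
)

// Illustrative only: render an apiServer extraArgs stanza like the one above
// from a component-options map.
const stanza = `apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
  extraArgs:
{{- range $k, $v := .ExtraArgs }}
    {{ $k }}: "{{ $v }}"
{{- end }}
`

func main() {
	data := struct {
		NodeIP    string
		ExtraArgs map[string]string
	}{
		NodeIP:    "192.168.49.2",
		ExtraArgs: map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"},
	}
	t := template.Must(template.New("apiserver").Parse(stanza))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
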
	
	I1212 00:19:41.605308 1166783 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-204186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1212 00:19:41.605374 1166783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:19:41.618178 1166783 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:19:41.618260 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:19:41.629314 1166783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I1212 00:19:41.652600 1166783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:19:41.674858 1166783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I1212 00:19:41.697192 1166783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:19:41.701898 1166783 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186 for IP: 192.168.49.2
	I1212 00:19:41.701928 1166783 certs.go:190] acquiring lock for shared ca certs: {Name:mk518d45f153d561b6d30fa5c8435abd4f573517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:41.702088 1166783 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key
	I1212 00:19:41.702139 1166783 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key
	I1212 00:19:41.702240 1166783 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.key
	I1212 00:19:41.702288 1166783 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.key.dd3b5fb2
	I1212 00:19:41.702322 1166783 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.key
	I1212 00:19:41.702433 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem (1338 bytes)
	W1212 00:19:41.702458 1166783 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281_empty.pem, impossibly tiny 0 bytes
	I1212 00:19:41.702465 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:19:41.702492 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:19:41.702516 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:19:41.702537 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem (1675 bytes)
	I1212 00:19:41.702582 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:19:41.703256 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:19:41.733829 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:19:41.764143 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:19:41.793194 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:19:41.822531 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:19:41.858002 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:19:41.895051 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:19:41.926100 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:19:41.955773 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:19:41.985536 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem --> /usr/share/ca-certificates/1141281.pem (1338 bytes)
	I1212 00:19:42.023297 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /usr/share/ca-certificates/11412812.pem (1708 bytes)
	I1212 00:19:42.056302 1166783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:19:42.081918 1166783 ssh_runner.go:195] Run: openssl version
	I1212 00:19:42.093411 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1141281.pem && ln -fs /usr/share/ca-certificates/1141281.pem /etc/ssl/certs/1141281.pem"
	I1212 00:19:42.109628 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.116307 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:18 /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.116422 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.138269 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1141281.pem /etc/ssl/certs/51391683.0"
	I1212 00:19:42.154200 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11412812.pem && ln -fs /usr/share/ca-certificates/11412812.pem /etc/ssl/certs/11412812.pem"
	I1212 00:19:42.169858 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.176203 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:18 /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.176290 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.189156 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11412812.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:19:42.205051 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:19:42.222308 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.228709 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.228802 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.241802 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
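
The three blocks above install each CA into /usr/share/ca-certificates and then expose it under /etc/ssl/certs both by name and by its OpenSSL subject hash (e.g. b5213941.0). A minimal Go sketch of that hash-and-symlink pattern, shelling out to the same openssl invocation seen in the log; the paths come from the log and the sketch is illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative only: hash a CA certificate with openssl and expose it to
	// OpenSSL consumers as /etc/ssl/certs/<hash>.0, as in the steps above.
	const cert = "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen above
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}
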
	I1212 00:19:42.256130 1166783 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:19:42.262370 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:19:42.272839 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:19:42.283158 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:19:42.292851 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:19:42.302374 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:19:42.312206 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:19:42.322552 1166783 kubeadm.go:404] StartCluster: {Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:42.322645 1166783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:19:42.322716 1166783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:19:42.378783 1166783 cri.go:89] found id: "4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783"
	I1212 00:19:42.378798 1166783 cri.go:89] found id: "9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d"
	I1212 00:19:42.378802 1166783 cri.go:89] found id: "4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743"
	I1212 00:19:42.378807 1166783 cri.go:89] found id: "b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e"
	I1212 00:19:42.378810 1166783 cri.go:89] found id: "7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	I1212 00:19:42.378816 1166783 cri.go:89] found id: "f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8"
	I1212 00:19:42.378820 1166783 cri.go:89] found id: "360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b"
	I1212 00:19:42.378823 1166783 cri.go:89] found id: "fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f"
	I1212 00:19:42.378827 1166783 cri.go:89] found id: "8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	I1212 00:19:42.378841 1166783 cri.go:89] found id: ""
	I1212 00:19:42.378903 1166783 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1212 00:19:42.413648 1166783 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b","pid":1280,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b/rootfs","created":"2023-12-12T00:18:43.008599856Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7f532c4a9c9f164eeeacdb7ee8b121ca"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783","pid":2831,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783/rootfs","created":"2023-12-12T00:19:35.214833678Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f4424a2e-f114-46c8-9059-3ddd8cab9386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743","pid":1892,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743/rootfs","created":"2023-12-12T00:19:04.817909252Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","io.kubernetes.cri.sandbox-name":"kindnet-p7qfc","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80d814ed-cb37-4243-97a7-61169cbf7ae7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","pid":1791,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5695af01bb75b84961173de189b46ab680
eebd75505d2f089d6304cab37f944e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e/rootfs","created":"2023-12-12T00:19:04.517306189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-p7qfc_80d814ed-cb37-4243-97a7-61169cbf7ae7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-p7qfc","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80d814ed-cb37-4243-97a7-61169cbf7ae7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","pid":1153,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672/rootfs","created":"2023-12-12T00:18:42.787933141Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-204186_7f532c4a9c9f164eeeacdb7ee8b121ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7f532c4a9c9f164eeeacdb7ee8b121ca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8204ec74c6c75a7ce2f3c9c385
56fda8152667cef4c5fd6f8c1c0281cb1b67e1","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1/rootfs","created":"2023-12-12T00:18:42.941005004Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bb521992826aaef3b829c57d52661ef"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","pid":1676,"status":"running","bundle":"/run/containerd/io.c
ontainerd.runtime.v2.task/k8s.io/91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b/rootfs","created":"2023-12-12T00:19:04.001976537Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f4424a2e-f114-46c8-9059-3ddd8cab9386","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f4424a2e-f114-46c8-9059-3ddd8cab9386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501
af","pid":1144,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af/rootfs","created":"2023-12-12T00:18:42.767256925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-204186_0bb521992826aaef3b829c57d52661ef","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bb521992826aaef3b829c57d52661ef"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","pid":1186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5/rootfs","created":"2023-12-12T00:18:42.825889148Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-204186_d3927b2e4e82a4e18057da3723e43cc0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubern
etes.cri.sandbox-uid":"d3927b2e4e82a4e18057da3723e43cc0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d","pid":2110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d/rootfs","created":"2023-12-12T00:19:19.034742843Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-75jb5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88486eba-5928-4a3b-b0e2-82572161ba5b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a33d
05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","pid":1752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa/rootfs","created":"2023-12-12T00:19:04.385714277Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xn2hr_17a4a16d-a0cd-45c8-bd8c-da9736f87535","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xn2hr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"17a4a16d-a0cd-45c8-bd8c-da9736f87
535"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","pid":1194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a/rootfs","created":"2023-12-12T00:18:42.861461679Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-204186_fe1cfa1135867fcf7ae120ad770b3e34","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system
","io.kubernetes.cri.sandbox-uid":"fe1cfa1135867fcf7ae120ad770b3e34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e","pid":1817,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e/rootfs","created":"2023-12-12T00:19:04.478952425Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","io.kubernetes.cri.sandbox-name":"kube-proxy-xn2hr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"17a4a16d-a0cd-45c8-bd8c-da9736f87535"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7
ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","pid":2076,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749/rootfs","created":"2023-12-12T00:19:18.939862893Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-75jb5_88486eba-5928-4a3b-b0e2-82572161ba5b","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-75jb5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88486
eba-5928-4a3b-b0e2-82572161ba5b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8","pid":1332,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8/rootfs","created":"2023-12-12T00:18:43.161380898Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","io.kubernetes.cri.sandbox-name":"etcd-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fe1cfa1135867fcf7ae120ad770b3e34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc16
3530fa8b5b88342f","pid":1326,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f/rootfs","created":"2023-12-12T00:18:43.150611388Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d3927b2e4e82a4e18057da3723e43cc0"},"owner":"root"}]
	I1212 00:19:42.413997 1166783 cri.go:126] list returned 16 containers
	I1212 00:19:42.414006 1166783 cri.go:129] container: {ID:360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b Status:running}
	I1212 00:19:42.414019 1166783 cri.go:135] skipping {360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b running}: state = "running", want "paused"
	I1212 00:19:42.414028 1166783 cri.go:129] container: {ID:4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 Status:running}
	I1212 00:19:42.414035 1166783 cri.go:135] skipping {4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 running}: state = "running", want "paused"
	I1212 00:19:42.414041 1166783 cri.go:129] container: {ID:4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 Status:running}
	I1212 00:19:42.414046 1166783 cri.go:135] skipping {4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 running}: state = "running", want "paused"
	I1212 00:19:42.414052 1166783 cri.go:129] container: {ID:5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e Status:running}
	I1212 00:19:42.414058 1166783 cri.go:131] skipping 5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e - not in ps
	I1212 00:19:42.414062 1166783 cri.go:129] container: {ID:74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672 Status:running}
	I1212 00:19:42.414068 1166783 cri.go:131] skipping 74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672 - not in ps
	I1212 00:19:42.414073 1166783 cri.go:129] container: {ID:8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 Status:running}
	I1212 00:19:42.414078 1166783 cri.go:135] skipping {8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 running}: state = "running", want "paused"
	I1212 00:19:42.414083 1166783 cri.go:129] container: {ID:91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b Status:running}
	I1212 00:19:42.414089 1166783 cri.go:131] skipping 91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b - not in ps
	I1212 00:19:42.414093 1166783 cri.go:129] container: {ID:929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af Status:running}
	I1212 00:19:42.414099 1166783 cri.go:131] skipping 929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af - not in ps
	I1212 00:19:42.414103 1166783 cri.go:129] container: {ID:9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5 Status:running}
	I1212 00:19:42.414111 1166783 cri.go:131] skipping 9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5 - not in ps
	I1212 00:19:42.414116 1166783 cri.go:129] container: {ID:9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d Status:running}
	I1212 00:19:42.414121 1166783 cri.go:135] skipping {9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d running}: state = "running", want "paused"
	I1212 00:19:42.414126 1166783 cri.go:129] container: {ID:a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa Status:running}
	I1212 00:19:42.414134 1166783 cri.go:131] skipping a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa - not in ps
	I1212 00:19:42.414138 1166783 cri.go:129] container: {ID:a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a Status:running}
	I1212 00:19:42.414144 1166783 cri.go:131] skipping a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a - not in ps
	I1212 00:19:42.414148 1166783 cri.go:129] container: {ID:b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e Status:running}
	I1212 00:19:42.414154 1166783 cri.go:135] skipping {b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e running}: state = "running", want "paused"
	I1212 00:19:42.414159 1166783 cri.go:129] container: {ID:e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749 Status:running}
	I1212 00:19:42.414165 1166783 cri.go:131] skipping e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749 - not in ps
	I1212 00:19:42.414169 1166783 cri.go:129] container: {ID:f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 Status:running}
	I1212 00:19:42.414175 1166783 cri.go:135] skipping {f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 running}: state = "running", want "paused"
	I1212 00:19:42.414180 1166783 cri.go:129] container: {ID:fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f Status:running}
	I1212 00:19:42.414185 1166783 cri.go:135] skipping {fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f running}: state = "running", want "paused"
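
cri.go above decodes the runc list JSON and then skips every container whose state is "running", because this pass asked for paused containers only ({State:paused ...} in the cri.go:54 line). A minimal Go sketch of that filter, assuming only the id/status fields visible in the JSON above.

package main

import (
	"encoding/json"
	"fmt"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedIDs decodes the output of `runc --root /run/containerd/runc/k8s.io list -f json`
// and keeps only containers already in the "paused" state, as cri.go does above.
func pausedIDs(listJSON []byte) ([]string, error) {
	var cs []runcContainer
	if err := json.Unmarshal(listJSON, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status == "paused" {
			ids = append(ids, c.ID)
		} else {
			fmt.Printf("skipping %s - state = %q, want \"paused\"\n", c.ID, c.Status)
		}
	}
	return ids, nil
}

func main() {
	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, err := pausedIDs(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("paused:", ids)
}
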
	I1212 00:19:42.414238 1166783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:19:42.426395 1166783 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 00:19:42.426407 1166783 kubeadm.go:636] restartCluster start
	I1212 00:19:42.426463 1166783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:19:42.437760 1166783 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:42.438356 1166783 kubeconfig.go:92] found "functional-204186" server: "https://192.168.49.2:8441"
	I1212 00:19:42.440196 1166783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:19:42.451884 1166783 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-12-12 00:18:34.640409327 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-12-12 00:19:41.687950639 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1212 00:19:42.451896 1166783 kubeadm.go:1135] stopping kube-system containers ...
	I1212 00:19:42.451907 1166783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:19:42.451963 1166783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:19:42.496926 1166783 cri.go:89] found id: "4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783"
	I1212 00:19:42.496941 1166783 cri.go:89] found id: "9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d"
	I1212 00:19:42.496946 1166783 cri.go:89] found id: "4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743"
	I1212 00:19:42.496949 1166783 cri.go:89] found id: "b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e"
	I1212 00:19:42.496952 1166783 cri.go:89] found id: "7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	I1212 00:19:42.496956 1166783 cri.go:89] found id: "f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8"
	I1212 00:19:42.496962 1166783 cri.go:89] found id: "360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b"
	I1212 00:19:42.496966 1166783 cri.go:89] found id: "fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f"
	I1212 00:19:42.496969 1166783 cri.go:89] found id: "8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	I1212 00:19:42.496979 1166783 cri.go:89] found id: ""
	I1212 00:19:42.496984 1166783 cri.go:234] Stopping containers: [4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1]
	I1212 00:19:42.497038 1166783 ssh_runner.go:195] Run: which crictl
	I1212 00:19:42.501723 1166783 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1
	I1212 00:19:47.777226 1166783 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1: (5.275464165s)
	W1212 00:19:47.777279 1166783 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1: Process exited with status 1
	stdout:
	4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783
	9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d
	4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743
	b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e
	
	stderr:
	E1212 00:19:47.774088    3356 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": not found" containerID="7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	time="2023-12-12T00:19:47Z" level=fatal msg="stopping the container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": not found"
	I1212 00:19:47.777340 1166783 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:19:47.851952 1166783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:19:47.862994 1166783 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Dec 12 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 12 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Dec 12 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 12 00:18 /etc/kubernetes/scheduler.conf
	
	I1212 00:19:47.863057 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:19:47.874186 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:19:47.886147 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:19:47.897962 1166783 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:47.898020 1166783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:19:47.908984 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:19:47.920492 1166783 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:47.920556 1166783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:19:47.931644 1166783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:19:47.942812 1166783 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 00:19:47.942838 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:48.021789 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.323474 1166783 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.301659419s)
	I1212 00:19:50.323493 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.533360 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.615860 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.703568 1166783 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:19:50.703633 1166783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:19:50.729933 1166783 api_server.go:72] duration metric: took 26.36494ms to wait for apiserver process to appear ...
	I1212 00:19:50.729948 1166783 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:19:50.729964 1166783 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:19:50.741750 1166783 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1212 00:19:50.757498 1166783 api_server.go:141] control plane version: v1.28.4
	I1212 00:19:50.757517 1166783 api_server.go:131] duration metric: took 27.563594ms to wait for apiserver health ...
	I1212 00:19:50.757525 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:50.757531 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:50.760139 1166783 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:19:50.762051 1166783 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:19:50.769174 1166783 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:19:50.769199 1166783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:19:50.799044 1166783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:19:51.250997 1166783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:19:51.259526 1166783 system_pods.go:59] 8 kube-system pods found
	I1212 00:19:51.259544 1166783 system_pods.go:61] "coredns-5dd5756b68-75jb5" [88486eba-5928-4a3b-b0e2-82572161ba5b] Running
	I1212 00:19:51.259548 1166783 system_pods.go:61] "etcd-functional-204186" [22eaa66d-9573-4688-a676-a624f562a069] Running
	I1212 00:19:51.259552 1166783 system_pods.go:61] "kindnet-p7qfc" [80d814ed-cb37-4243-97a7-61169cbf7ae7] Running
	I1212 00:19:51.259556 1166783 system_pods.go:61] "kube-apiserver-functional-204186" [69dc4cd3-92c5-4f67-813d-c38849073058] Running
	I1212 00:19:51.259561 1166783 system_pods.go:61] "kube-controller-manager-functional-204186" [d4482b26-8308-4a6f-8efe-dd15c7689236] Running
	I1212 00:19:51.259568 1166783 system_pods.go:61] "kube-proxy-xn2hr" [17a4a16d-a0cd-45c8-bd8c-da9736f87535] Running
	I1212 00:19:51.259572 1166783 system_pods.go:61] "kube-scheduler-functional-204186" [9b16cc61-09fb-4f6c-af03-029249e6bf3d] Running
	I1212 00:19:51.259579 1166783 system_pods.go:61] "storage-provisioner" [f4424a2e-f114-46c8-9059-3ddd8cab9386] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:19:51.259592 1166783 system_pods.go:74] duration metric: took 8.577845ms to wait for pod list to return data ...
	I1212 00:19:51.259600 1166783 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:19:51.262989 1166783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:19:51.263007 1166783 node_conditions.go:123] node cpu capacity is 2
	I1212 00:19:51.263019 1166783 node_conditions.go:105] duration metric: took 3.412063ms to run NodePressure ...
	I1212 00:19:51.263034 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:51.488154 1166783 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 00:19:51.492965 1166783 retry.go:31] will retry after 311.086978ms: kubelet not initialised
	I1212 00:19:51.856393 1166783 retry.go:31] will retry after 290.962584ms: kubelet not initialised
	I1212 00:19:52.153722 1166783 kubeadm.go:787] kubelet initialised
	I1212 00:19:52.153732 1166783 kubeadm.go:788] duration metric: took 665.564362ms waiting for restarted kubelet to initialise ...
	I1212 00:19:52.153739 1166783 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:19:52.164434 1166783 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.194792 1166783 pod_ready.go:97] error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.194808 1166783 pod_ready.go:81] duration metric: took 1.030359999s waiting for pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.194819 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.194842 1166783 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195194 1166783 pod_ready.go:97] error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195209 1166783 pod_ready.go:81] duration metric: took 359.62µs waiting for pod "etcd-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195218 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195237 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195488 1166783 pod_ready.go:97] error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195496 1166783 pod_ready.go:81] duration metric: took 253.176µs waiting for pod "kube-apiserver-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195504 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195523 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195772 1166783 pod_ready.go:97] error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195781 1166783 pod_ready.go:81] duration metric: took 252.076µs waiting for pod "kube-controller-manager-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195789 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195811 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xn2hr" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.196092 1166783 pod_ready.go:97] error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196100 1166783 pod_ready.go:81] duration metric: took 283.485µs waiting for pod "kube-proxy-xn2hr" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.196108 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196128 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.196388 1166783 pod_ready.go:97] error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196397 1166783 pod_ready.go:81] duration metric: took 241.82µs waiting for pod "kube-scheduler-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.196405 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196422 1166783 pod_ready.go:38] duration metric: took 1.042674859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:19:53.196436 1166783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W1212 00:19:53.205857 1166783 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I1212 00:19:53.205869 1166783 kubeadm.go:640] restartCluster took 10.779458014s
	I1212 00:19:53.205876 1166783 kubeadm.go:406] StartCluster complete in 10.883351408s
	I1212 00:19:53.205889 1166783 settings.go:142] acquiring lock: {Name:mk888158b3cbabbb2583b6a6f74ff62a9621d5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:53.205956 1166783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:19:53.206588 1166783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/kubeconfig: {Name:mkea8ea25a391ae5db2568a02e638c76b0d6995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:53.206816 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:19:53.207101 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:53.207263 1166783 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:19:53.207360 1166783 addons.go:69] Setting storage-provisioner=true in profile "functional-204186"
	I1212 00:19:53.207373 1166783 addons.go:231] Setting addon storage-provisioner=true in "functional-204186"
	W1212 00:19:53.207379 1166783 addons.go:240] addon storage-provisioner should already be in state true
	I1212 00:19:53.207445 1166783 host.go:66] Checking if "functional-204186" exists ...
	I1212 00:19:53.207821 1166783 addons.go:69] Setting default-storageclass=true in profile "functional-204186"
	I1212 00:19:53.207836 1166783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-204186"
	I1212 00:19:53.207860 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:53.208104 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	W1212 00:19:53.208431 1166783 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-204186" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.208443 1166783 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.208513 1166783 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:19:53.213129 1166783 out.go:177] * Verifying Kubernetes components...
	I1212 00:19:53.218222 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:19:53.249929 1166783 addons.go:231] Setting addon default-storageclass=true in "functional-204186"
	W1212 00:19:53.249941 1166783 addons.go:240] addon default-storageclass should already be in state true
	I1212 00:19:53.249963 1166783 host.go:66] Checking if "functional-204186" exists ...
	I1212 00:19:53.250432 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:53.268837 1166783 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:19:53.270964 1166783 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:19:53.270978 1166783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:19:53.271046 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:53.291691 1166783 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:19:53.291703 1166783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:19:53.291762 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:53.316806 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	E1212 00:19:53.334327 1166783 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1212 00:19:53.334349 1166783 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1212 00:19:53.334363 1166783 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1212 00:19:53.334502 1166783 node_ready.go:35] waiting up to 6m0s for node "functional-204186" to be "Ready" ...
	I1212 00:19:53.334838 1166783 node_ready.go:53] error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.334851 1166783 node_ready.go:38] duration metric: took 337.762µs waiting for node "functional-204186" to be "Ready" ...
	I1212 00:19:53.338808 1166783 out.go:177] 
	W1212 00:19:53.341024 1166783 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:53.341047 1166783 out.go:239] * 
	W1212 00:19:53.342114 1166783 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:19:53.345121 1166783 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0abe98be43702       97e04611ad434       4 seconds ago        Running             coredns                   1                   e7ee9926b7666       coredns-5dd5756b68-75jb5
	35d39f988885b       ba04bb24b9575       4 seconds ago        Running             storage-provisioner       2                   91c1e8e748144       storage-provisioner
	0be1a7ad1cf29       3ca3ca488cf13       4 seconds ago        Running             kube-proxy                1                   a33d05ee8738d       kube-proxy-xn2hr
	11a13b0a859b6       04b4eaa3d3db8       4 seconds ago        Running             kindnet-cni               1                   5695af01bb75b       kindnet-p7qfc
	ec98dd60a37e1       04b4c447bb9d4       4 seconds ago        Exited              kube-apiserver            1                   f426cbf93d3b8       kube-apiserver-functional-204186
	4b48b0124d6a9       ba04bb24b9575       21 seconds ago       Exited              storage-provisioner       1                   91c1e8e748144       storage-provisioner
	9be15f3092c17       97e04611ad434       37 seconds ago       Exited              coredns                   0                   e7ee9926b7666       coredns-5dd5756b68-75jb5
	4c3518b0312ab       04b4eaa3d3db8       52 seconds ago       Exited              kindnet-cni               0                   5695af01bb75b       kindnet-p7qfc
	b10a99a14fe0a       3ca3ca488cf13       52 seconds ago       Exited              kube-proxy                0                   a33d05ee8738d       kube-proxy-xn2hr
	f1bf1c4332d38       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   a67f2d7f88b70       etcd-functional-204186
	360b7493b53a0       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   74d87972b5980       kube-controller-manager-functional-204186
	fdcc7d847a538       05c284c929889       About a minute ago   Running             kube-scheduler            0                   9832a28d5e6be       kube-scheduler-functional-204186
	
	* 
	* ==> containerd <==
	* Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.357740880Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.378507754Z" level=info msg="StartContainer for \"0abe98be437028e07d1455f24fbb28b5834240a4e81135bf4c59ed7090a55ce6\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.394345783Z" level=info msg="StartContainer for \"11a13b0a859b60db60e77078ff0a9c0fd3bf66498e48a7c2d9e4bb2e192725e2\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.415127565Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3873 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.418758457Z" level=info msg="StartContainer for \"0be1a7ad1cf29b232d9b2f17256ac8b257d11a9c4160fe02383d37cae2b804bc\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.899780450Z" level=info msg="StopContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" with timeout 2 (s)"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.900933615Z" level=info msg="Stop container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" with signal terminated"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957937189Z" level=info msg="shim disconnected" id=929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957985557Z" level=warning msg="cleaning up after shim disconnected" id=929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af namespace=k8s.io
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957995765Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.979783726Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4061 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.984347576Z" level=info msg="RemoveContainer for \"5b2014e0c953df23c937e404c551a12c7f253506c7e09be700cf94c74ebf812f\""
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.989771654Z" level=info msg="RemoveContainer for \"5b2014e0c953df23c937e404c551a12c7f253506c7e09be700cf94c74ebf812f\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012811357Z" level=info msg="shim disconnected" id=8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012885572Z" level=warning msg="cleaning up after shim disconnected" id=8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 namespace=k8s.io
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012899808Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.024366134Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4086 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.027684749Z" level=info msg="StopContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028454754Z" level=info msg="StopPodSandbox for \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\""
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028552870Z" level=info msg="Container to stop \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028768507Z" level=info msg="TearDown network for sandbox \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\" successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028788060Z" level=info msg="StopPodSandbox for \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.994452895Z" level=info msg="RemoveContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\""
	Dec 12 00:19:54 functional-204186 containerd[3161]: time="2023-12-12T00:19:54.006865232Z" level=info msg="RemoveContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" returns successfully"
	Dec 12 00:19:54 functional-204186 containerd[3161]: time="2023-12-12T00:19:54.010093116Z" level=error msg="ContainerStatus for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": not found"
	
	* 
	* ==> coredns [0abe98be437028e07d1455f24fbb28b5834240a4e81135bf4c59ed7090a55ce6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47675 - 44376 "HINFO IN 8293000808179795944.5557927990981517027. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012825832s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43954 - 51804 "HINFO IN 7258703346742299720.7667376483180762445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013624227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001103] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001114] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +0.004970] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=000000001bb038f1
	[  +0.001195] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000760] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000ce236adb
	[  +0.001178] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +3.628923] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001149] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000c19aa351
	[  +0.001199] FS-Cache: O-key=[8] '4f3e5c0100000000'
	[  +0.000795] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001065] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001178] FS-Cache: N-key=[8] '4f3e5c0100000000'
	[  +0.413575] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000d9ff942f
	[  +0.001137] FS-Cache: O-key=[8] '553e5c0100000000'
	[  +0.000730] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001098] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=000000008357462d
	[  +0.001241] FS-Cache: N-key=[8] '553e5c0100000000'
	
	* 
	* ==> etcd [f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8] <==
	* {"level":"info","ts":"2023-12-12T00:18:43.480306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-12T00:18:43.487446Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-12T00:18:43.489297Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T00:18:43.489605Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:18:43.495477Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:18:43.496288Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T00:18:43.496421Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:18:44.017135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.019999Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-204186 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:18:44.020176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:18:44.025361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-12T00:18:44.025687Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.03183Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:18:44.033017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:18:44.043361Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:18:44.075781Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:19:57 up  7:02,  0 users,  load average: 1.58, 1.26, 0.74
	Linux functional-204186 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [11a13b0a859b60db60e77078ff0a9c0fd3bf66498e48a7c2d9e4bb2e192725e2] <==
	* I1212 00:19:52.432050       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:19:52.432116       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:19:52.432252       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:19:52.432267       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:19:52.432281       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:19:52.821571       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:52.821602       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743] <==
	* I1212 00:19:04.921582       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:19:04.921651       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:19:04.921815       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:19:04.921830       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:19:04.921862       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:19:05.417437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:05.417644       1 main.go:227] handling current node
	I1212 00:19:15.430336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:15.430366       1 main.go:227] handling current node
	I1212 00:19:25.443764       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:25.443792       1 main.go:227] handling current node
	I1212 00:19:35.447984       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:35.448411       1 main.go:227] handling current node
	I1212 00:19:45.457404       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:45.457445       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2] <==
	* I1212 00:19:52.287856       1 options.go:220] external host was not specified, using 192.168.49.2
	I1212 00:19:52.289191       1 server.go:148] Version: v1.28.4
	I1212 00:19:52.289333       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1212 00:19:52.295465       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b] <==
	* I1212 00:19:01.913164       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:19:02.207880       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xn2hr"
	I1212 00:19:02.236055       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p7qfc"
	I1212 00:19:02.340642       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:19:02.370173       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:19:02.370208       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:19:02.402848       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 00:19:02.492956       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 00:19:02.782766       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zssw5"
	I1212 00:19:02.793880       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-75jb5"
	I1212 00:19:02.839458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="437.553609ms"
	I1212 00:19:02.859056       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-zssw5"
	I1212 00:19:02.870835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.305148ms"
	I1212 00:19:02.890752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.754212ms"
	I1212 00:19:02.911932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.123482ms"
	I1212 00:19:02.912115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.549µs"
	I1212 00:19:04.058663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.954µs"
	I1212 00:19:04.066980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.04µs"
	I1212 00:19:04.074159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.197µs"
	I1212 00:19:19.096666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.778µs"
	I1212 00:19:20.108699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.224824ms"
	I1212 00:19:20.110382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.409µs"
	I1212 00:19:20.110809       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1212 00:19:51.900629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.264291ms"
	I1212 00:19:51.900757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.154µs"
	
	* 
	* ==> kube-proxy [0be1a7ad1cf29b232d9b2f17256ac8b257d11a9c4160fe02383d37cae2b804bc] <==
	* I1212 00:19:52.508466       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:19:52.509572       1 config.go:188] "Starting service config controller"
	I1212 00:19:52.509727       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:19:52.509823       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:19:52.509839       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:19:52.510562       1 config.go:315] "Starting node config controller"
	I1212 00:19:52.510576       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:19:52.610589       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:19:52.610596       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:19:52.610644       1 shared_informer.go:318] Caches are synced for node config
	W1212 00:19:52.978692       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:52.978734       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:52.978764       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:53.789245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.789317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:54.120973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:54.121036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:54.515895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:54.515945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:56.332508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:56.332565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:56.582357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:56.582417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:56.637977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:56.638031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-proxy [b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e] <==
	* I1212 00:19:04.559125       1 server_others.go:69] "Using iptables proxy"
	I1212 00:19:04.576426       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:19:04.599481       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:19:04.601715       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:19:04.601899       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:19:04.601984       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:19:04.602161       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:19:04.602462       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:19:04.602795       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:19:04.604008       1 config.go:188] "Starting service config controller"
	I1212 00:19:04.604321       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:19:04.604499       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:19:04.604575       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:19:04.606449       1 config.go:315] "Starting node config controller"
	I1212 00:19:04.606600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:19:04.705005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:19:04.705039       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:19:04.706747       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f] <==
	* W1212 00:18:46.916161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:18:46.916178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 00:18:46.916225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:18:46.916240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:18:46.916287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:18:46.916302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:18:46.916358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:46.916373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:46.916558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:18:46.916578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:18:47.752991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:47.753339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:47.752996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:18:47.753590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:18:47.793668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:18:47.793878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:18:47.884125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:18:47.884158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 00:18:47.908838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:18:47.909070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:18:47.984156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:47.984297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:48.049219       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:18:48.049625       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 00:18:50.203649       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.010520    3542 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"} err="failed to get container status \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": not found"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.151566    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.152082    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.152486    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.152879    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.153267    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.153670    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:54 functional-204186 kubelet[3542]: I1212 00:19:54.893882    3542 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0bb521992826aaef3b829c57d52661ef" path="/var/lib/kubelet/pods/0bb521992826aaef3b829c57d52661ef/volumes"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:54.999969    3542 scope.go:117] "RemoveContainer" containerID="ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: E1212 00:19:55.000582    3542 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-204186_kube-system(102fdf4414c3e8f4b2b76c9e617d21ca)\"" pod="kube-system/kube-apiserver-functional-204186" podUID="102fdf4414c3e8f4b2b76c9e617d21ca"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.000753    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001101    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001318    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001515    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001680    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:55 functional-204186 kubelet[3542]: I1212 00:19:55.001855    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:56 functional-204186 kubelet[3542]: I1212 00:19:56.001602    3542 scope.go:117] "RemoveContainer" containerID="ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2"
	Dec 12 00:19:56 functional-204186 kubelet[3542]: E1212 00:19:56.002229    3542 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-204186_kube-system(102fdf4414c3e8f4b2b76c9e617d21ca)\"" pod="kube-system/kube-apiserver-functional-204186" podUID="102fdf4414c3e8f4b2b76c9e617d21ca"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.497466    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.497799    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.498072    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.498368    3542 status_manager.go:853] "Failed to get status for pod" podUID="7f532c4a9c9f164eeeacdb7ee8b121ca" pod="kube-system/kube-controller-manager-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.498638    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.498910    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:19:57 functional-204186 kubelet[3542]: I1212 00:19:57.499208    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	* 
	* ==> storage-provisioner [35d39f988885b4b77d9b8fd4e6fd28e8cd51d3db66cdc048ce6c8b9a7ab9d5d3] <==
	* I1212 00:19:52.288774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:19:52.305023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:19:52.305089       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1212 00:19:55.759801       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783] <==
	* I1212 00:19:35.251844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:19:35.267237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:19:35.267809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:19:35.292941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:19:35.293350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59!
	I1212 00:19:35.293575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dce0da4-55da-4699-8143-5da0ecbb7ad6", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59 became leader
	I1212 00:19:35.394357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:19:57.194473 1168527 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-204186 -n functional-204186
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-204186 -n functional-204186: exit status 2 (360.620765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-204186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (2.41s)
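Note on the status check above: the --format flag passed to minikube status is a Go text/template rendered over a status object. Below is a minimal sketch of that evaluation, assuming a Status struct that only mirrors the two fields this report actually exercises ({{.Host}} and {{.APIServer}}); it is not minikube's real type.

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an assumed stand-in for the object minikube renders with --format;
	// only the fields seen in this report are included.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"} // values taken from the stdout blocks above
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the post-mortem output
	}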

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 logs --file /tmp/TestFunctionalserialLogsFileCmd500223927/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 logs --file /tmp/TestFunctionalserialLogsFileCmd500223927/001/logs.txt: (2.024242857s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:20:01.147152 1169007 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (2.03s)
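A minimal sketch of the assertion this test makes, assuming the harness simply runs logs --file and requires that nothing is printed to the console (everything should land in the file). The helper names, binary path, and log file path below are placeholders, not the suite's actual code; in this run the check tripped because stderr carried the "unable to fetch logs for: describe nodes" message while the apiserver on :8441 was refusing connections.

	package functional_test

	import (
		"bytes"
		"os/exec"
		"testing"
	)

	func TestLogsFileCmdSketch(t *testing.T) {
		var stdout, stderr bytes.Buffer
		// Placeholder invocation mirroring the command shown above.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-204186",
			"logs", "--file", "/tmp/logs.txt")
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			t.Fatalf("logs --file: %v", err)
		}
		// Require that neither stream carried output; the failure above shows
		// non-empty stderr even though stdout was empty.
		if stdout.Len() != 0 || stderr.Len() != 0 {
			t.Errorf("expected empty output, got stdout=%q stderr=%q", stdout.String(), stderr.String())
		}
	}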

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-204186 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-204186 apply -f testdata/invalidsvc.yaml: exit status 1 (80.226167ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-204186 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (2.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-204186 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-204186 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (70.507654ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-204186 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
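The template failure repeated above is plain Go text/template behaviour: with the apiserver refusing connections, kubectl still hands the template an empty List ({"items":[]}), so (index .items 0) has nothing to index and every label assertion fails the same way. A minimal sketch reproducing it follows; the data literal is a stand-in for that empty List, and the exact wording of the index error depends on the Go version (the report shows the reflect-panic form).

	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for the empty List object shown in the "raw data was" lines above.
		data := map[string]interface{}{
			"apiVersion": "v1",
			"items":      []interface{}{},
			"kind":       "List",
		}
		tmpl := template.Must(template.New("output").Parse(
			"{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			// Fails at (index .items 0) because items is empty; the report logs this
			// as: error calling index: reflect: slice index out of range.
			fmt.Fprintln(os.Stderr, err)
		}
	}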
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-204186
helpers_test.go:235: (dbg) docker inspect functional-204186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31",
	        "Created": "2023-12-12T00:18:25.221497989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1163078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:18:25.55807554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/hosts",
	        "LogPath": "/var/lib/docker/containers/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31/7cfe39aaf2d8b0d3f41cf9f73ea36d635a3f59968bb6bb4adbec9df879bf2d31-json.log",
	        "Name": "/functional-204186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-204186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-204186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281-init/diff:/var/lib/docker/overlay2/83f94b9f515065f4cf4d4337d1fbe3fc13b585131a89a52ad8eb2b6bf7d119ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2ae7d7ec11d5cfe6ea1717f36d6c356dbc4449d929d6e95898a8cc6962b281/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-204186",
	                "Source": "/var/lib/docker/volumes/functional-204186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-204186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-204186",
	                "name.minikube.sigs.k8s.io": "functional-204186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f06969edef670514b05008e5de9ef1c1b17b7cfbdaf03c893731542632a1c35",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34042"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34039"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34041"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34040"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1f06969edef6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-204186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7cfe39aaf2d8",
	                        "functional-204186"
	                    ],
	                    "NetworkID": "6ba4ac6be618f8f1444cda50bb12d14c77e16c004975f4866f6cf01acb655fe8",
	                    "EndpointID": "4e500a345b2e632c078524e976c542a16025a1d15c3a51f19fb2c9cb3755c9b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
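For reference, a minimal sketch (not part of the test harness) of extracting the fields this post-mortem relies on from the docker inspect JSON above, using Go's encoding/json. The struct only mirrors keys visible in the dump, and the profile name is the one from this run.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the parts of the inspect document used here:
	// the container name and the per-network IP address (192.168.49.2 above).
	type inspectEntry struct {
		Name            string `json:"Name"`
		NetworkSettings struct {
			Networks map[string]struct {
				IPAddress string `json:"IPAddress"`
			} `json:"Networks"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-204186").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect prints a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			for netName, net := range e.NetworkSettings.Networks {
				fmt.Printf("%s: network %s -> %s\n", e.Name, netName, net.IPAddress)
			}
		}
	}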
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-204186 -n functional-204186
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-204186 -n functional-204186: exit status 2 (388.953519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 logs -n 25: (1.816521488s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cache   | functional-204186 cache delete                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | minikube-local-cache-test:functional-204186                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	| ssh     | functional-204186 ssh sudo                                               | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-204186                                                        | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh                                                    | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-204186 cache reload                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	| ssh     | functional-204186 ssh                                                    | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-204186 kubectl --                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC | 12 Dec 23 00:19 UTC |
	|         | --context functional-204186                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-204186                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:19 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| cp      | functional-204186 cp                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-204186 config unset                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-204186 config get                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-204186 config set                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| config  | functional-204186 config get                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh -n                                                 | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | functional-204186 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-204186 config unset                                           | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-204186 config get                                             | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| license |                                                                          | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	| cp      | functional-204186 cp                                                     | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | functional-204186:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd87923566/001/cp-test.txt                 |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh sudo                                               | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC |                     |
	|         | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh -n                                                 | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC | 12 Dec 23 00:20 UTC |
	|         | functional-204186 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| ssh     | functional-204186 ssh sudo                                               | functional-204186 | jenkins | v1.32.0 | 12 Dec 23 00:20 UTC |                     |
	|         | systemctl is-active crio                                                 |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:19:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:19:38.104221 1166783 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:19:38.104406 1166783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:19:38.104410 1166783 out.go:309] Setting ErrFile to fd 2...
	I1212 00:19:38.104415 1166783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:19:38.104683 1166783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:19:38.105092 1166783 out.go:303] Setting JSON to false
	I1212 00:19:38.106053 1166783 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25325,"bootTime":1702315053,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:19:38.106118 1166783 start.go:138] virtualization:  
	I1212 00:19:38.108824 1166783 out.go:177] * [functional-204186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:19:38.111872 1166783 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:19:38.114135 1166783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:19:38.112023 1166783 notify.go:220] Checking for updates...
	I1212 00:19:38.117202 1166783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:19:38.119664 1166783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:19:38.122229 1166783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:19:38.124644 1166783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:19:38.127615 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:38.127742 1166783 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:19:38.155131 1166783 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:19:38.155239 1166783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:19:38.234235 1166783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:19:38.224036211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:19:38.234326 1166783 docker.go:295] overlay module found
	I1212 00:19:38.236723 1166783 out.go:177] * Using the docker driver based on existing profile
	I1212 00:19:38.239483 1166783 start.go:298] selected driver: docker
	I1212 00:19:38.239491 1166783 start.go:902] validating driver "docker" against &{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:38.239572 1166783 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:19:38.239692 1166783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:19:38.328217 1166783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:19:38.318818701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:19:38.328604 1166783 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:19:38.328648 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:38.328655 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:38.328667 1166783 start_flags.go:323] config:
	{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:38.331267 1166783 out.go:177] * Starting control plane node functional-204186 in cluster functional-204186
	I1212 00:19:38.333309 1166783 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:19:38.335076 1166783 out.go:177] * Pulling base image ...
	I1212 00:19:38.336829 1166783 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:19:38.336887 1166783 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1212 00:19:38.336894 1166783 cache.go:56] Caching tarball of preloaded images
	I1212 00:19:38.336924 1166783 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:19:38.336994 1166783 preload.go:174] Found /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:19:38.337003 1166783 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1212 00:19:38.337114 1166783 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/config.json ...
	I1212 00:19:38.354767 1166783 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:19:38.354782 1166783 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:19:38.354805 1166783 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:19:38.354850 1166783 start.go:365] acquiring machines lock for functional-204186: {Name:mk52ac4d0a7302cc0a39b0bd3e6a9baa9621f9b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:19:38.354922 1166783 start.go:369] acquired machines lock for "functional-204186" in 52.545µs
	I1212 00:19:38.354940 1166783 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:19:38.354946 1166783 fix.go:54] fixHost starting: 
	I1212 00:19:38.355276 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:38.374303 1166783 fix.go:102] recreateIfNeeded on functional-204186: state=Running err=<nil>
	W1212 00:19:38.374326 1166783 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:19:38.377049 1166783 out.go:177] * Updating the running docker "functional-204186" container ...
	I1212 00:19:38.379542 1166783 machine.go:88] provisioning docker machine ...
	I1212 00:19:38.379560 1166783 ubuntu.go:169] provisioning hostname "functional-204186"
	I1212 00:19:38.379653 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:38.400809 1166783 main.go:141] libmachine: Using SSH client type: native
	I1212 00:19:38.401311 1166783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34043 <nil> <nil>}
	I1212 00:19:38.401325 1166783 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-204186 && echo "functional-204186" | sudo tee /etc/hostname
	I1212 00:19:38.558711 1166783 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-204186
	
	I1212 00:19:38.558784 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:38.582856 1166783 main.go:141] libmachine: Using SSH client type: native
	I1212 00:19:38.583296 1166783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34043 <nil> <nil>}
	I1212 00:19:38.583341 1166783 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-204186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-204186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-204186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:19:38.724752 1166783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:19:38.724773 1166783 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1135857/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1135857/.minikube}
	I1212 00:19:38.724797 1166783 ubuntu.go:177] setting up certificates
	I1212 00:19:38.724805 1166783 provision.go:83] configureAuth start
	I1212 00:19:38.724870 1166783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-204186
	I1212 00:19:38.743972 1166783 provision.go:138] copyHostCerts
	I1212 00:19:38.744040 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem, removing ...
	I1212 00:19:38.744067 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem
	I1212 00:19:38.744143 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem (1078 bytes)
	I1212 00:19:38.744245 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem, removing ...
	I1212 00:19:38.744249 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem
	I1212 00:19:38.744273 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem (1123 bytes)
	I1212 00:19:38.744330 1166783 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem, removing ...
	I1212 00:19:38.744335 1166783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem
	I1212 00:19:38.744358 1166783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem (1675 bytes)
	I1212 00:19:38.744406 1166783 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem org=jenkins.functional-204186 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-204186]
	I1212 00:19:39.317206 1166783 provision.go:172] copyRemoteCerts
	I1212 00:19:39.317258 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:19:39.317326 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.337099 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.437908 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 00:19:39.468125 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:19:39.498465 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:19:39.527513 1166783 provision.go:86] duration metric: configureAuth took 802.695673ms
	I1212 00:19:39.527531 1166783 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:19:39.527738 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:39.527750 1166783 machine.go:91] provisioned docker machine in 1.148193061s
	I1212 00:19:39.527756 1166783 start.go:300] post-start starting for "functional-204186" (driver="docker")
	I1212 00:19:39.527765 1166783 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:19:39.527814 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:19:39.527849 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.546029 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.646100 1166783 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:19:39.650558 1166783 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:19:39.650583 1166783 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:19:39.650596 1166783 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:19:39.650602 1166783 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:19:39.650611 1166783 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/addons for local assets ...
	I1212 00:19:39.650666 1166783 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/files for local assets ...
	I1212 00:19:39.650748 1166783 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem -> 11412812.pem in /etc/ssl/certs
	I1212 00:19:39.650824 1166783 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts -> hosts in /etc/test/nested/copy/1141281
	I1212 00:19:39.650866 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1141281
	I1212 00:19:39.662106 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:19:39.691766 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts --> /etc/test/nested/copy/1141281/hosts (40 bytes)
	I1212 00:19:39.720033 1166783 start.go:303] post-start completed in 192.262029ms
	I1212 00:19:39.720120 1166783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:19:39.720158 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.738794 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.833772 1166783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:19:39.841224 1166783 fix.go:56] fixHost completed within 1.486270012s
	I1212 00:19:39.841239 1166783 start.go:83] releasing machines lock for "functional-204186", held for 1.486310422s
	I1212 00:19:39.841305 1166783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-204186
	I1212 00:19:39.862046 1166783 ssh_runner.go:195] Run: cat /version.json
	I1212 00:19:39.862101 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.862350 1166783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:19:39.862409 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:39.888120 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.889925 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:19:39.988243 1166783 ssh_runner.go:195] Run: systemctl --version
	I1212 00:19:40.123119 1166783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:19:40.130619 1166783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1212 00:19:40.156770 1166783 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:19:40.156843 1166783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:19:40.168259 1166783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:19:40.168274 1166783 start.go:475] detecting cgroup driver to use...
	I1212 00:19:40.168327 1166783 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:19:40.168376 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:19:40.185332 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:19:40.200998 1166783 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:19:40.201062 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:19:40.219091 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:19:40.236047 1166783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:19:40.377539 1166783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:19:40.507741 1166783 docker.go:219] disabling docker service ...
	I1212 00:19:40.507815 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:19:40.525312 1166783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:19:40.541366 1166783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:19:40.671172 1166783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:19:40.800459 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:19:40.815340 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:19:40.836927 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:19:40.851130 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:19:40.864529 1166783 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:19:40.864600 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:19:40.880806 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:19:40.895794 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:19:40.909131 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:19:40.922165 1166783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:19:40.933419 1166783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:19:40.946768 1166783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:19:40.957556 1166783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:19:40.968029 1166783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:19:41.083587 1166783 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:19:41.323366 1166783 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:19:41.323436 1166783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:19:41.331217 1166783 start.go:543] Will wait 60s for crictl version
	I1212 00:19:41.331273 1166783 ssh_runner.go:195] Run: which crictl
	I1212 00:19:41.339039 1166783 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:19:41.383778 1166783 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1212 00:19:41.383846 1166783 ssh_runner.go:195] Run: containerd --version
	I1212 00:19:41.416098 1166783 ssh_runner.go:195] Run: containerd --version
	I1212 00:19:41.448227 1166783 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I1212 00:19:41.450208 1166783 cli_runner.go:164] Run: docker network inspect functional-204186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:19:41.467903 1166783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:19:41.474722 1166783 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:19:41.476837 1166783 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:19:41.476929 1166783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:19:41.521062 1166783 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:19:41.521076 1166783 containerd.go:518] Images already preloaded, skipping extraction
	I1212 00:19:41.521129 1166783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:19:41.563890 1166783 containerd.go:604] all images are preloaded for containerd runtime.
	I1212 00:19:41.563902 1166783 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:19:41.563972 1166783 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:19:41.605056 1166783 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:19:41.605079 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:41.605088 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:41.605097 1166783 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:19:41.605114 1166783 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-204186 NodeName:functional-204186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:19:41.605243 1166783 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-204186"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:19:41.605308 1166783 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-204186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1212 00:19:41.605374 1166783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:19:41.618178 1166783 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:19:41.618260 1166783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:19:41.629314 1166783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I1212 00:19:41.652600 1166783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:19:41.674858 1166783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I1212 00:19:41.697192 1166783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:19:41.701898 1166783 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186 for IP: 192.168.49.2
	I1212 00:19:41.701928 1166783 certs.go:190] acquiring lock for shared ca certs: {Name:mk518d45f153d561b6d30fa5c8435abd4f573517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:41.702088 1166783 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key
	I1212 00:19:41.702139 1166783 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key
	I1212 00:19:41.702240 1166783 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.key
	I1212 00:19:41.702288 1166783 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.key.dd3b5fb2
	I1212 00:19:41.702322 1166783 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.key
	I1212 00:19:41.702433 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem (1338 bytes)
	W1212 00:19:41.702458 1166783 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281_empty.pem, impossibly tiny 0 bytes
	I1212 00:19:41.702465 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:19:41.702492 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:19:41.702516 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:19:41.702537 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem (1675 bytes)
	I1212 00:19:41.702582 1166783 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:19:41.703256 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:19:41.733829 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:19:41.764143 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:19:41.793194 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:19:41.822531 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:19:41.858002 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:19:41.895051 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:19:41.926100 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:19:41.955773 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:19:41.985536 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem --> /usr/share/ca-certificates/1141281.pem (1338 bytes)
	I1212 00:19:42.023297 1166783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /usr/share/ca-certificates/11412812.pem (1708 bytes)
	I1212 00:19:42.056302 1166783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:19:42.081918 1166783 ssh_runner.go:195] Run: openssl version
	I1212 00:19:42.093411 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1141281.pem && ln -fs /usr/share/ca-certificates/1141281.pem /etc/ssl/certs/1141281.pem"
	I1212 00:19:42.109628 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.116307 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:18 /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.116422 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1141281.pem
	I1212 00:19:42.138269 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1141281.pem /etc/ssl/certs/51391683.0"
	I1212 00:19:42.154200 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11412812.pem && ln -fs /usr/share/ca-certificates/11412812.pem /etc/ssl/certs/11412812.pem"
	I1212 00:19:42.169858 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.176203 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:18 /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.176290 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11412812.pem
	I1212 00:19:42.189156 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11412812.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:19:42.205051 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:19:42.222308 1166783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.228709 1166783 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.228802 1166783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:19:42.241802 1166783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:19:42.256130 1166783 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:19:42.262370 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:19:42.272839 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:19:42.283158 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:19:42.292851 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:19:42.302374 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:19:42.312206 1166783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:19:42.322552 1166783 kubeadm.go:404] StartCluster: {Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:19:42.322645 1166783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:19:42.322716 1166783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:19:42.378783 1166783 cri.go:89] found id: "4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783"
	I1212 00:19:42.378798 1166783 cri.go:89] found id: "9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d"
	I1212 00:19:42.378802 1166783 cri.go:89] found id: "4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743"
	I1212 00:19:42.378807 1166783 cri.go:89] found id: "b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e"
	I1212 00:19:42.378810 1166783 cri.go:89] found id: "7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	I1212 00:19:42.378816 1166783 cri.go:89] found id: "f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8"
	I1212 00:19:42.378820 1166783 cri.go:89] found id: "360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b"
	I1212 00:19:42.378823 1166783 cri.go:89] found id: "fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f"
	I1212 00:19:42.378827 1166783 cri.go:89] found id: "8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	I1212 00:19:42.378841 1166783 cri.go:89] found id: ""
	I1212 00:19:42.378903 1166783 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1212 00:19:42.413648 1166783 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b","pid":1280,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b/rootfs","created":"2023-12-12T00:18:43.008599856Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri.sandbox-id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7f532c4a9c9f164eeeacdb7ee8b121ca"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783","pid":2831,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783/rootfs","created":"2023-12-12T00:19:35.214833678Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f4424a2e-f114-46c8-9059-3ddd8cab9386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743","pid":1892,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743/rootfs","created":"2023-12-12T00:19:04.817909252Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","io.kubernetes.cri.sandbox-name":"kindnet-p7qfc","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80d814ed-cb37-4243-97a7-61169cbf7ae7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","pid":1791,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5695af01bb75b84961173de189b46ab680
eebd75505d2f089d6304cab37f944e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e/rootfs","created":"2023-12-12T00:19:04.517306189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-p7qfc_80d814ed-cb37-4243-97a7-61169cbf7ae7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-p7qfc","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80d814ed-cb37-4243-97a7-61169cbf7ae7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","pid":1153,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672/rootfs","created":"2023-12-12T00:18:42.787933141Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-204186_7f532c4a9c9f164eeeacdb7ee8b121ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7f532c4a9c9f164eeeacdb7ee8b121ca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8204ec74c6c75a7ce2f3c9c385
56fda8152667cef4c5fd6f8c1c0281cb1b67e1","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1/rootfs","created":"2023-12-12T00:18:42.941005004Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri.sandbox-id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bb521992826aaef3b829c57d52661ef"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","pid":1676,"status":"running","bundle":"/run/containerd/io.c
ontainerd.runtime.v2.task/k8s.io/91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b/rootfs","created":"2023-12-12T00:19:04.001976537Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f4424a2e-f114-46c8-9059-3ddd8cab9386","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f4424a2e-f114-46c8-9059-3ddd8cab9386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501
af","pid":1144,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af/rootfs","created":"2023-12-12T00:18:42.767256925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-204186_0bb521992826aaef3b829c57d52661ef","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bb521992826aaef3b829c57d52661ef"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","pid":1186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5/rootfs","created":"2023-12-12T00:18:42.825889148Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-204186_d3927b2e4e82a4e18057da3723e43cc0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubern
etes.cri.sandbox-uid":"d3927b2e4e82a4e18057da3723e43cc0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d","pid":2110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d/rootfs","created":"2023-12-12T00:19:19.034742843Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-75jb5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88486eba-5928-4a3b-b0e2-82572161ba5b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a33d
05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","pid":1752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa/rootfs","created":"2023-12-12T00:19:04.385714277Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xn2hr_17a4a16d-a0cd-45c8-bd8c-da9736f87535","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xn2hr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"17a4a16d-a0cd-45c8-bd8c-da9736f87
535"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","pid":1194,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a/rootfs","created":"2023-12-12T00:18:42.861461679Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-204186_fe1cfa1135867fcf7ae120ad770b3e34","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system
","io.kubernetes.cri.sandbox-uid":"fe1cfa1135867fcf7ae120ad770b3e34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e","pid":1817,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e/rootfs","created":"2023-12-12T00:19:04.478952425Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri.sandbox-id":"a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa","io.kubernetes.cri.sandbox-name":"kube-proxy-xn2hr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"17a4a16d-a0cd-45c8-bd8c-da9736f87535"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7
ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","pid":2076,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749/rootfs","created":"2023-12-12T00:19:18.939862893Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-75jb5_88486eba-5928-4a3b-b0e2-82572161ba5b","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-75jb5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88486
eba-5928-4a3b-b0e2-82572161ba5b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8","pid":1332,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8/rootfs","created":"2023-12-12T00:18:43.161380898Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a","io.kubernetes.cri.sandbox-name":"etcd-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fe1cfa1135867fcf7ae120ad770b3e34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc16
3530fa8b5b88342f","pid":1326,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f/rootfs","created":"2023-12-12T00:18:43.150611388Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri.sandbox-id":"9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-204186","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d3927b2e4e82a4e18057da3723e43cc0"},"owner":"root"}]
	I1212 00:19:42.413997 1166783 cri.go:126] list returned 16 containers
	I1212 00:19:42.414006 1166783 cri.go:129] container: {ID:360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b Status:running}
	I1212 00:19:42.414019 1166783 cri.go:135] skipping {360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b running}: state = "running", want "paused"
	I1212 00:19:42.414028 1166783 cri.go:129] container: {ID:4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 Status:running}
	I1212 00:19:42.414035 1166783 cri.go:135] skipping {4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 running}: state = "running", want "paused"
	I1212 00:19:42.414041 1166783 cri.go:129] container: {ID:4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 Status:running}
	I1212 00:19:42.414046 1166783 cri.go:135] skipping {4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 running}: state = "running", want "paused"
	I1212 00:19:42.414052 1166783 cri.go:129] container: {ID:5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e Status:running}
	I1212 00:19:42.414058 1166783 cri.go:131] skipping 5695af01bb75b84961173de189b46ab680eebd75505d2f089d6304cab37f944e - not in ps
	I1212 00:19:42.414062 1166783 cri.go:129] container: {ID:74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672 Status:running}
	I1212 00:19:42.414068 1166783 cri.go:131] skipping 74d87972b5980fa3c381904500a38786a1bfe1b1064e493a906f53b21d610672 - not in ps
	I1212 00:19:42.414073 1166783 cri.go:129] container: {ID:8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 Status:running}
	I1212 00:19:42.414078 1166783 cri.go:135] skipping {8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 running}: state = "running", want "paused"
	I1212 00:19:42.414083 1166783 cri.go:129] container: {ID:91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b Status:running}
	I1212 00:19:42.414089 1166783 cri.go:131] skipping 91c1e8e7481442e6a0f48d54dea00751946fa9cc3112584c7e74bbbde891133b - not in ps
	I1212 00:19:42.414093 1166783 cri.go:129] container: {ID:929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af Status:running}
	I1212 00:19:42.414099 1166783 cri.go:131] skipping 929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af - not in ps
	I1212 00:19:42.414103 1166783 cri.go:129] container: {ID:9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5 Status:running}
	I1212 00:19:42.414111 1166783 cri.go:131] skipping 9832a28d5e6bead165bfe6a134b3cb364236d266b68298e9fb67163efda5e1a5 - not in ps
	I1212 00:19:42.414116 1166783 cri.go:129] container: {ID:9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d Status:running}
	I1212 00:19:42.414121 1166783 cri.go:135] skipping {9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d running}: state = "running", want "paused"
	I1212 00:19:42.414126 1166783 cri.go:129] container: {ID:a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa Status:running}
	I1212 00:19:42.414134 1166783 cri.go:131] skipping a33d05ee8738d48aef576e497b373a8d0ba11ac3d639a80e0ae580d4394e13aa - not in ps
	I1212 00:19:42.414138 1166783 cri.go:129] container: {ID:a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a Status:running}
	I1212 00:19:42.414144 1166783 cri.go:131] skipping a67f2d7f88b7013fb56d07f9df7e4db29c791e5a4daa80ec5f7592079554d84a - not in ps
	I1212 00:19:42.414148 1166783 cri.go:129] container: {ID:b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e Status:running}
	I1212 00:19:42.414154 1166783 cri.go:135] skipping {b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e running}: state = "running", want "paused"
	I1212 00:19:42.414159 1166783 cri.go:129] container: {ID:e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749 Status:running}
	I1212 00:19:42.414165 1166783 cri.go:131] skipping e7ee9926b76665fff90654fb1ebe264f3ee3bf69c44952a756857bc88505a749 - not in ps
	I1212 00:19:42.414169 1166783 cri.go:129] container: {ID:f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 Status:running}
	I1212 00:19:42.414175 1166783 cri.go:135] skipping {f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 running}: state = "running", want "paused"
	I1212 00:19:42.414180 1166783 cri.go:129] container: {ID:fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f Status:running}
	I1212 00:19:42.414185 1166783 cri.go:135] skipping {fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f running}: state = "running", want "paused"
	I1212 00:19:42.414238 1166783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:19:42.426395 1166783 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 00:19:42.426407 1166783 kubeadm.go:636] restartCluster start
	I1212 00:19:42.426463 1166783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:19:42.437760 1166783 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:42.438356 1166783 kubeconfig.go:92] found "functional-204186" server: "https://192.168.49.2:8441"
	I1212 00:19:42.440196 1166783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:19:42.451884 1166783 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-12-12 00:18:34.640409327 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-12-12 00:19:41.687950639 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1212 00:19:42.451896 1166783 kubeadm.go:1135] stopping kube-system containers ...
	I1212 00:19:42.451907 1166783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:19:42.451963 1166783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:19:42.496926 1166783 cri.go:89] found id: "4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783"
	I1212 00:19:42.496941 1166783 cri.go:89] found id: "9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d"
	I1212 00:19:42.496946 1166783 cri.go:89] found id: "4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743"
	I1212 00:19:42.496949 1166783 cri.go:89] found id: "b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e"
	I1212 00:19:42.496952 1166783 cri.go:89] found id: "7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	I1212 00:19:42.496956 1166783 cri.go:89] found id: "f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8"
	I1212 00:19:42.496962 1166783 cri.go:89] found id: "360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b"
	I1212 00:19:42.496966 1166783 cri.go:89] found id: "fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f"
	I1212 00:19:42.496969 1166783 cri.go:89] found id: "8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1"
	I1212 00:19:42.496979 1166783 cri.go:89] found id: ""
	I1212 00:19:42.496984 1166783 cri.go:234] Stopping containers: [4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1]
	I1212 00:19:42.497038 1166783 ssh_runner.go:195] Run: which crictl
	I1212 00:19:42.501723 1166783 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1
	I1212 00:19:47.777226 1166783 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1: (5.275464165s)
	W1212 00:19:47.777279 1166783 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783 9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d 4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743 b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e 7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9 f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8 360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f 8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1: Process exited with status 1
	stdout:
	4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783
	9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d
	4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743
	b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e
	
	stderr:
	E1212 00:19:47.774088    3356 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": not found" containerID="7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9"
	time="2023-12-12T00:19:47Z" level=fatal msg="stopping the container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a44533eaa141d43ad9f9ecc22c098b82bb9394bdc966494186e99cb42f06da9\": not found"
	I1212 00:19:47.777340 1166783 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:19:47.851952 1166783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:19:47.862994 1166783 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Dec 12 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 12 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Dec 12 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 12 00:18 /etc/kubernetes/scheduler.conf
	
	I1212 00:19:47.863057 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:19:47.874186 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:19:47.886147 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:19:47.897962 1166783 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:47.898020 1166783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:19:47.908984 1166783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:19:47.920492 1166783 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:19:47.920556 1166783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:19:47.931644 1166783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:19:47.942812 1166783 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 00:19:47.942838 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:48.021789 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.323474 1166783 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.301659419s)
	I1212 00:19:50.323493 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.533360 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.615860 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:50.703568 1166783 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:19:50.703633 1166783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:19:50.729933 1166783 api_server.go:72] duration metric: took 26.36494ms to wait for apiserver process to appear ...
	I1212 00:19:50.729948 1166783 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:19:50.729964 1166783 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:19:50.741750 1166783 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1212 00:19:50.757498 1166783 api_server.go:141] control plane version: v1.28.4
	I1212 00:19:50.757517 1166783 api_server.go:131] duration metric: took 27.563594ms to wait for apiserver health ...
	I1212 00:19:50.757525 1166783 cni.go:84] Creating CNI manager for ""
	I1212 00:19:50.757531 1166783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:19:50.760139 1166783 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:19:50.762051 1166783 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:19:50.769174 1166783 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:19:50.769199 1166783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:19:50.799044 1166783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:19:51.250997 1166783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:19:51.259526 1166783 system_pods.go:59] 8 kube-system pods found
	I1212 00:19:51.259544 1166783 system_pods.go:61] "coredns-5dd5756b68-75jb5" [88486eba-5928-4a3b-b0e2-82572161ba5b] Running
	I1212 00:19:51.259548 1166783 system_pods.go:61] "etcd-functional-204186" [22eaa66d-9573-4688-a676-a624f562a069] Running
	I1212 00:19:51.259552 1166783 system_pods.go:61] "kindnet-p7qfc" [80d814ed-cb37-4243-97a7-61169cbf7ae7] Running
	I1212 00:19:51.259556 1166783 system_pods.go:61] "kube-apiserver-functional-204186" [69dc4cd3-92c5-4f67-813d-c38849073058] Running
	I1212 00:19:51.259561 1166783 system_pods.go:61] "kube-controller-manager-functional-204186" [d4482b26-8308-4a6f-8efe-dd15c7689236] Running
	I1212 00:19:51.259568 1166783 system_pods.go:61] "kube-proxy-xn2hr" [17a4a16d-a0cd-45c8-bd8c-da9736f87535] Running
	I1212 00:19:51.259572 1166783 system_pods.go:61] "kube-scheduler-functional-204186" [9b16cc61-09fb-4f6c-af03-029249e6bf3d] Running
	I1212 00:19:51.259579 1166783 system_pods.go:61] "storage-provisioner" [f4424a2e-f114-46c8-9059-3ddd8cab9386] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:19:51.259592 1166783 system_pods.go:74] duration metric: took 8.577845ms to wait for pod list to return data ...
	I1212 00:19:51.259600 1166783 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:19:51.262989 1166783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:19:51.263007 1166783 node_conditions.go:123] node cpu capacity is 2
	I1212 00:19:51.263019 1166783 node_conditions.go:105] duration metric: took 3.412063ms to run NodePressure ...
	I1212 00:19:51.263034 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:19:51.488154 1166783 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 00:19:51.492965 1166783 retry.go:31] will retry after 311.086978ms: kubelet not initialised
	I1212 00:19:51.856393 1166783 retry.go:31] will retry after 290.962584ms: kubelet not initialised
	I1212 00:19:52.153722 1166783 kubeadm.go:787] kubelet initialised
	I1212 00:19:52.153732 1166783 kubeadm.go:788] duration metric: took 665.564362ms waiting for restarted kubelet to initialise ...
	I1212 00:19:52.153739 1166783 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:19:52.164434 1166783 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.194792 1166783 pod_ready.go:97] error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.194808 1166783 pod_ready.go:81] duration metric: took 1.030359999s waiting for pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.194819 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-75jb5" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.194842 1166783 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195194 1166783 pod_ready.go:97] error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195209 1166783 pod_ready.go:81] duration metric: took 359.62µs waiting for pod "etcd-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195218 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195237 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195488 1166783 pod_ready.go:97] error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195496 1166783 pod_ready.go:81] duration metric: took 253.176µs waiting for pod "kube-apiserver-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195504 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195523 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.195772 1166783 pod_ready.go:97] error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195781 1166783 pod_ready.go:81] duration metric: took 252.076µs waiting for pod "kube-controller-manager-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.195789 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.195811 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xn2hr" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.196092 1166783 pod_ready.go:97] error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196100 1166783 pod_ready.go:81] duration metric: took 283.485µs waiting for pod "kube-proxy-xn2hr" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.196108 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-xn2hr" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196128 1166783 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-204186" in "kube-system" namespace to be "Ready" ...
	I1212 00:19:53.196388 1166783 pod_ready.go:97] error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196397 1166783 pod_ready.go:81] duration metric: took 241.82µs waiting for pod "kube-scheduler-functional-204186" in "kube-system" namespace to be "Ready" ...
	E1212 00:19:53.196405 1166783 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-204186" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.196422 1166783 pod_ready.go:38] duration metric: took 1.042674859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:19:53.196436 1166783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W1212 00:19:53.205857 1166783 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I1212 00:19:53.205869 1166783 kubeadm.go:640] restartCluster took 10.779458014s
	I1212 00:19:53.205876 1166783 kubeadm.go:406] StartCluster complete in 10.883351408s
	I1212 00:19:53.205889 1166783 settings.go:142] acquiring lock: {Name:mk888158b3cbabbb2583b6a6f74ff62a9621d5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:53.205956 1166783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:19:53.206588 1166783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/kubeconfig: {Name:mkea8ea25a391ae5db2568a02e638c76b0d6995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:19:53.206816 1166783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:19:53.207101 1166783 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:19:53.207263 1166783 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:19:53.207360 1166783 addons.go:69] Setting storage-provisioner=true in profile "functional-204186"
	I1212 00:19:53.207373 1166783 addons.go:231] Setting addon storage-provisioner=true in "functional-204186"
	W1212 00:19:53.207379 1166783 addons.go:240] addon storage-provisioner should already be in state true
	I1212 00:19:53.207445 1166783 host.go:66] Checking if "functional-204186" exists ...
	I1212 00:19:53.207821 1166783 addons.go:69] Setting default-storageclass=true in profile "functional-204186"
	I1212 00:19:53.207836 1166783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-204186"
	I1212 00:19:53.207860 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:53.208104 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	W1212 00:19:53.208431 1166783 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-204186" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.208443 1166783 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.208513 1166783 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:19:53.213129 1166783 out.go:177] * Verifying Kubernetes components...
	I1212 00:19:53.218222 1166783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:19:53.249929 1166783 addons.go:231] Setting addon default-storageclass=true in "functional-204186"
	W1212 00:19:53.249941 1166783 addons.go:240] addon default-storageclass should already be in state true
	I1212 00:19:53.249963 1166783 host.go:66] Checking if "functional-204186" exists ...
	I1212 00:19:53.250432 1166783 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:19:53.268837 1166783 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:19:53.270964 1166783 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:19:53.270978 1166783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:19:53.271046 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:53.291691 1166783 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:19:53.291703 1166783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:19:53.291762 1166783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:19:53.316806 1166783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	E1212 00:19:53.334327 1166783 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1212 00:19:53.334349 1166783 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1212 00:19:53.334363 1166783 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1212 00:19:53.334502 1166783 node_ready.go:35] waiting up to 6m0s for node "functional-204186" to be "Ready" ...
	I1212 00:19:53.334838 1166783 node_ready.go:53] error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.334851 1166783 node_ready.go:38] duration metric: took 337.762µs waiting for node "functional-204186" to be "Ready" ...
	I1212 00:19:53.338808 1166783 out.go:177] 
	W1212 00:19:53.341024 1166783 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-204186": Get "https://192.168.49.2:8441/api/v1/nodes/functional-204186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:53.341047 1166783 out.go:239] * 
	W1212 00:19:53.342114 1166783 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:19:53.345121 1166783 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0abe98be43702       97e04611ad434       12 seconds ago       Running             coredns                   1                   e7ee9926b7666       coredns-5dd5756b68-75jb5
	35d39f988885b       ba04bb24b9575       12 seconds ago       Running             storage-provisioner       2                   91c1e8e748144       storage-provisioner
	0be1a7ad1cf29       3ca3ca488cf13       12 seconds ago       Running             kube-proxy                1                   a33d05ee8738d       kube-proxy-xn2hr
	11a13b0a859b6       04b4eaa3d3db8       12 seconds ago       Running             kindnet-cni               1                   5695af01bb75b       kindnet-p7qfc
	ec98dd60a37e1       04b4c447bb9d4       13 seconds ago       Exited              kube-apiserver            1                   f426cbf93d3b8       kube-apiserver-functional-204186
	4b48b0124d6a9       ba04bb24b9575       29 seconds ago       Exited              storage-provisioner       1                   91c1e8e748144       storage-provisioner
	9be15f3092c17       97e04611ad434       46 seconds ago       Exited              coredns                   0                   e7ee9926b7666       coredns-5dd5756b68-75jb5
	4c3518b0312ab       04b4eaa3d3db8       About a minute ago   Exited              kindnet-cni               0                   5695af01bb75b       kindnet-p7qfc
	b10a99a14fe0a       3ca3ca488cf13       About a minute ago   Exited              kube-proxy                0                   a33d05ee8738d       kube-proxy-xn2hr
	f1bf1c4332d38       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   a67f2d7f88b70       etcd-functional-204186
	360b7493b53a0       9961cbceaf234       About a minute ago   Running             kube-controller-manager   0                   74d87972b5980       kube-controller-manager-functional-204186
	fdcc7d847a538       05c284c929889       About a minute ago   Running             kube-scheduler            0                   9832a28d5e6be       kube-scheduler-functional-204186
	
	* 
	* ==> containerd <==
	* Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.357740880Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.378507754Z" level=info msg="StartContainer for \"0abe98be437028e07d1455f24fbb28b5834240a4e81135bf4c59ed7090a55ce6\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.394345783Z" level=info msg="StartContainer for \"11a13b0a859b60db60e77078ff0a9c0fd3bf66498e48a7c2d9e4bb2e192725e2\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.415127565Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3873 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.418758457Z" level=info msg="StartContainer for \"0be1a7ad1cf29b232d9b2f17256ac8b257d11a9c4160fe02383d37cae2b804bc\" returns successfully"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.899780450Z" level=info msg="StopContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" with timeout 2 (s)"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.900933615Z" level=info msg="Stop container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" with signal terminated"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957937189Z" level=info msg="shim disconnected" id=929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957985557Z" level=warning msg="cleaning up after shim disconnected" id=929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af namespace=k8s.io
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.957995765Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.979783726Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4061 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.984347576Z" level=info msg="RemoveContainer for \"5b2014e0c953df23c937e404c551a12c7f253506c7e09be700cf94c74ebf812f\""
	Dec 12 00:19:52 functional-204186 containerd[3161]: time="2023-12-12T00:19:52.989771654Z" level=info msg="RemoveContainer for \"5b2014e0c953df23c937e404c551a12c7f253506c7e09be700cf94c74ebf812f\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012811357Z" level=info msg="shim disconnected" id=8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012885572Z" level=warning msg="cleaning up after shim disconnected" id=8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1 namespace=k8s.io
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.012899808Z" level=info msg="cleaning up dead shim"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.024366134Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4086 runtime=io.containerd.runc.v2\n"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.027684749Z" level=info msg="StopContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028454754Z" level=info msg="StopPodSandbox for \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\""
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028552870Z" level=info msg="Container to stop \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028768507Z" level=info msg="TearDown network for sandbox \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\" successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.028788060Z" level=info msg="StopPodSandbox for \"929161a47906f4f4a6da5c6ef00c7562619f88b8b4fab165e7b9922ef4e501af\" returns successfully"
	Dec 12 00:19:53 functional-204186 containerd[3161]: time="2023-12-12T00:19:53.994452895Z" level=info msg="RemoveContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\""
	Dec 12 00:19:54 functional-204186 containerd[3161]: time="2023-12-12T00:19:54.006865232Z" level=info msg="RemoveContainer for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" returns successfully"
	Dec 12 00:19:54 functional-204186 containerd[3161]: time="2023-12-12T00:19:54.010093116Z" level=error msg="ContainerStatus for \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8204ec74c6c75a7ce2f3c9c38556fda8152667cef4c5fd6f8c1c0281cb1b67e1\": not found"
	
	* 
	* ==> coredns [0abe98be437028e07d1455f24fbb28b5834240a4e81135bf4c59ed7090a55ce6] <==
	* [INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47675 - 44376 "HINFO IN 8293000808179795944.5557927990981517027. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012825832s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=490": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=495": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [9be15f3092c175de011919c334efa66b1358d455a990173bb566dff1dd0f3e4d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43954 - 51804 "HINFO IN 7258703346742299720.7667376483180762445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013624227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001103] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001114] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +0.004970] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=000000001bb038f1
	[  +0.001195] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000760] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000ce236adb
	[  +0.001178] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +3.628923] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001149] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000c19aa351
	[  +0.001199] FS-Cache: O-key=[8] '4f3e5c0100000000'
	[  +0.000795] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001065] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000a5aa55b4
	[  +0.001178] FS-Cache: N-key=[8] '4f3e5c0100000000'
	[  +0.413575] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000d9ff942f
	[  +0.001137] FS-Cache: O-key=[8] '553e5c0100000000'
	[  +0.000730] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001098] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=000000008357462d
	[  +0.001241] FS-Cache: N-key=[8] '553e5c0100000000'
	
	* 
	* ==> etcd [f1bf1c4332d383706c939be06f1c8bc1995b743e1c0dd6eb54d150bf5efdf0b8] <==
	* {"level":"info","ts":"2023-12-12T00:18:43.480306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-12T00:18:43.487446Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-12T00:18:43.489297Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T00:18:43.489605Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:18:43.495477Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:18:43.496288Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T00:18:43.496421Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:18:44.017135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-12T00:18:44.017597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.017854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:18:44.019999Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-204186 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:18:44.020176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:18:44.025361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-12T00:18:44.025687Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.03183Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:18:44.031933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:18:44.033017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:18:44.043361Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:18:44.075781Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:20:05 up  7:02,  0 users,  load average: 1.72, 1.30, 0.76
	Linux functional-204186 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [11a13b0a859b60db60e77078ff0a9c0fd3bf66498e48a7c2d9e4bb2e192725e2] <==
	* I1212 00:19:52.432050       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:19:52.432116       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:19:52.432252       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:19:52.432267       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:19:52.432281       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:19:52.821571       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:52.821602       1 main.go:227] handling current node
	I1212 00:20:02.926179       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1212 00:20:02.926358       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1212 00:20:03.931828       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kindnet [4c3518b0312ab6a62c9a24bd771d209bded100188a8c2762b98043a46fd26743] <==
	* I1212 00:19:04.921582       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:19:04.921651       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:19:04.921815       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:19:04.921830       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:19:04.921862       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:19:05.417437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:05.417644       1 main.go:227] handling current node
	I1212 00:19:15.430336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:15.430366       1 main.go:227] handling current node
	I1212 00:19:25.443764       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:25.443792       1 main.go:227] handling current node
	I1212 00:19:35.447984       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:35.448411       1 main.go:227] handling current node
	I1212 00:19:45.457404       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:19:45.457445       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2] <==
	* I1212 00:19:52.287856       1 options.go:220] external host was not specified, using 192.168.49.2
	I1212 00:19:52.289191       1 server.go:148] Version: v1.28.4
	I1212 00:19:52.289333       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1212 00:19:52.295465       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [360b7493b53a008e0f02bd40ab6ed74dd0d3c905dacd5e16e9a6416c1c84c68b] <==
	* I1212 00:19:02.236055       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p7qfc"
	I1212 00:19:02.340642       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:19:02.370173       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:19:02.370208       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:19:02.402848       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 00:19:02.492956       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 00:19:02.782766       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zssw5"
	I1212 00:19:02.793880       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-75jb5"
	I1212 00:19:02.839458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="437.553609ms"
	I1212 00:19:02.859056       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-zssw5"
	I1212 00:19:02.870835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.305148ms"
	I1212 00:19:02.890752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.754212ms"
	I1212 00:19:02.911932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.123482ms"
	I1212 00:19:02.912115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.549µs"
	I1212 00:19:04.058663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.954µs"
	I1212 00:19:04.066980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.04µs"
	I1212 00:19:04.074159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.197µs"
	I1212 00:19:19.096666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.778µs"
	I1212 00:19:20.108699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.224824ms"
	I1212 00:19:20.110382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.409µs"
	I1212 00:19:20.110809       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1212 00:19:51.900629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.264291ms"
	I1212 00:19:51.900757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.154µs"
	E1212 00:20:01.879808       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.49.2:8441/api": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:02.347268       1 garbagecollector.go:818] "failed to discover preferred resources" error="Get \"https://192.168.49.2:8441/api\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	* 
	* ==> kube-proxy [0be1a7ad1cf29b232d9b2f17256ac8b257d11a9c4160fe02383d37cae2b804bc] <==
	* I1212 00:19:52.510576       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:19:52.610589       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:19:52.610596       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:19:52.610644       1 shared_informer.go:318] Caches are synced for node config
	W1212 00:19:52.978692       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:52.978734       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:52.978764       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W1212 00:19:53.789245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:53.789317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:54.120973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:54.121036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:54.515895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:54.515945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:56.332508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:56.332565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:56.582357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:56.582417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:19:56.637977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:19:56.638031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:20:00.349012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:20:00.349068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:20:01.229215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:20:01.229259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-204186&resourceVersion=482": dial tcp 192.168.49.2:8441: connect: connection refused
	W1212 00:20:02.215699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	E1212 00:20:02.215758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=495": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-proxy [b10a99a14fe0a3a257a156fc0295b4162e7ac3a32ef92bdba52f976f0bf3653e] <==
	* I1212 00:19:04.559125       1 server_others.go:69] "Using iptables proxy"
	I1212 00:19:04.576426       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:19:04.599481       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:19:04.601715       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:19:04.601899       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:19:04.601984       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:19:04.602161       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:19:04.602462       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:19:04.602795       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:19:04.604008       1 config.go:188] "Starting service config controller"
	I1212 00:19:04.604321       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:19:04.604499       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:19:04.604575       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:19:04.606449       1 config.go:315] "Starting node config controller"
	I1212 00:19:04.606600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:19:04.705005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:19:04.705039       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:19:04.706747       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fdcc7d847a5383f4acac54f7eaed291652519f1c904ccc163530fa8b5b88342f] <==
	* W1212 00:18:46.916161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:18:46.916178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 00:18:46.916225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:18:46.916240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:18:46.916287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:18:46.916302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:18:46.916358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:46.916373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:46.916558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:18:46.916578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:18:47.752991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:47.753339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:47.752996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:18:47.753590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:18:47.793668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:18:47.793878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:18:47.884125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:18:47.884158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 00:18:47.908838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:18:47.909070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:18:47.984156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:18:47.984297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:18:48.049219       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:18:48.049625       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 00:18:50.203649       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.494936    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.495134    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: E1212 00:20:01.541772    3542 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-204186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="800ms"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.858905    3542 scope.go:117] "RemoveContainer" containerID="ec98dd60a37e1076275d5952c3d5e8b7ed319c256c740c8ad18c6f658343b4d2"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: E1212 00:20:01.859460    3542 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-204186.179fed916a8331e2", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-204186", UID:"102fdf4414c3e8f4b2b76c9e617d21ca", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"Back-off restarting failed container kube-apiserver in pod kube-apiserver-function
al-204186_kube-system(102fdf4414c3e8f4b2b76c9e617d21ca)", Source:v1.EventSource{Component:"kubelet", Host:"functional-204186"}, FirstTimestamp:time.Date(2023, time.December, 12, 0, 19, 52, 982360546, time.Local), LastTimestamp:time.Date(2023, time.December, 12, 0, 19, 52, 982360546, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-204186"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Dec 12 00:20:01 functional-204186 kubelet[3542]: E1212 00:20:01.859584    3542 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-204186_kube-system(102fdf4414c3e8f4b2b76c9e617d21ca)\"" pod="kube-system/kube-apiserver-functional-204186" podUID="102fdf4414c3e8f4b2b76c9e617d21ca"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.973786    3542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.976496    3542 status_manager.go:853] "Failed to get status for pod" podUID="7f532c4a9c9f164eeeacdb7ee8b121ca" pod="kube-system/kube-controller-manager-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.976901    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.977245    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.977563    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.977911    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.978245    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.978542    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:01 functional-204186 kubelet[3542]: I1212 00:20:01.978868    3542 status_manager.go:853] "Failed to get status for pod" podUID="fe1cfa1135867fcf7ae120ad770b3e34" pod="kube-system/etcd-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.023366    3542 status_manager.go:853] "Failed to get status for pod" podUID="d3927b2e4e82a4e18057da3723e43cc0" pod="kube-system/kube-scheduler-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.023785    3542 status_manager.go:853] "Failed to get status for pod" podUID="f4424a2e-f114-46c8-9059-3ddd8cab9386" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.024095    3542 status_manager.go:853] "Failed to get status for pod" podUID="17a4a16d-a0cd-45c8-bd8c-da9736f87535" pod="kube-system/kube-proxy-xn2hr" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xn2hr\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.025115    3542 status_manager.go:853] "Failed to get status for pod" podUID="88486eba-5928-4a3b-b0e2-82572161ba5b" pod="kube-system/coredns-5dd5756b68-75jb5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-75jb5\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.025469    3542 status_manager.go:853] "Failed to get status for pod" podUID="80d814ed-cb37-4243-97a7-61169cbf7ae7" pod="kube-system/kindnet-p7qfc" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-p7qfc\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.025818    3542 status_manager.go:853] "Failed to get status for pod" podUID="102fdf4414c3e8f4b2b76c9e617d21ca" pod="kube-system/kube-apiserver-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.026140    3542 status_manager.go:853] "Failed to get status for pod" podUID="fe1cfa1135867fcf7ae120ad770b3e34" pod="kube-system/etcd-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: I1212 00:20:02.026478    3542 status_manager.go:853] "Failed to get status for pod" podUID="7f532c4a9c9f164eeeacdb7ee8b121ca" pod="kube-system/kube-controller-manager-functional-204186" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-204186\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 12 00:20:02 functional-204186 kubelet[3542]: E1212 00:20:02.342936    3542 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-204186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="1.6s"
	Dec 12 00:20:03 functional-204186 kubelet[3542]: E1212 00:20:03.944258    3542 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-204186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="3.2s"
	
	* 
	* ==> storage-provisioner [35d39f988885b4b77d9b8fd4e6fd28e8cd51d3db66cdc048ce6c8b9a7ab9d5d3] <==
	* I1212 00:19:52.288774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:19:52.305023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:19:52.305089       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1212 00:19:55.759801       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1212 00:20:00.029628       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1212 00:20:03.625369       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [4b48b0124d6a9cc95c8dd20fee91e70c633d02892468152efa4afd46f74f0783] <==
	* I1212 00:19:35.251844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:19:35.267237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:19:35.267809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:19:35.292941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:19:35.293350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59!
	I1212 00:19:35.293575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dce0da4-55da-4699-8143-5da0ecbb7ad6", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59 became leader
	I1212 00:19:35.394357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-204186_dd73c0bf-f445-4f72-a0d2-65024ea73d59!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:20:05.283974 1169654 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-204186 -n functional-204186
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-204186 -n functional-204186: exit status 2 (539.231163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-204186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (2.89s)
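
The kube-apiserver log above exits with "failed to create listener: failed to listen on 0.0.0.0:8441: bind: address already in use", which is why NodeLabels (and the service subtests further down) find the control plane stopped. A minimal way to confirm the port conflict from the host is sketched here, assuming the same binary and profile name used throughout this report and that ss from iproute2 is present in the node image:

	# is the apiserver reported as running?
	out/minikube-linux-arm64 status -p functional-204186
	# who is holding 8441 inside the node? (ss availability in the node image is an assumption)
	out/minikube-linux-arm64 -p functional-204186 ssh -- sudo ss -ltnp | grep 8441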

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr: (3.739600905s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-204186" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.09s)
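
ImageLoadDaemon loads the tagged image from the local Docker daemon into the node's container runtime and then asserts it shows up in image ls. The same check can be rerun by hand with the commands already shown in this trace; grep is only added here for readability:

	out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr
	out/minikube-linux-arm64 -p functional-204186 image ls | grep addon-resizer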

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-204186 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1436: (dbg) Non-zero exit: kubectl --context functional-204186 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (79.927971ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1442: failed to create hello-node deployment with this command "kubectl --context functional-204186 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.08s)
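
The deployment never reached the cluster because the apiserver at 192.168.49.2:8441 was refusing connections. A quick readiness probe before retrying the create is sketched below, reusing the kubeconfig context from this report; /readyz is the standard apiserver health endpoint:

	kubectl --context functional-204186 get --raw /readyz
	kubectl --context functional-204186 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8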

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 service list
functional_test.go:1458: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 service list: exit status 119 (397.488173ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-204186"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-204186"

                                                
                                                
** /stderr **
functional_test.go:1460: failed to do service list. args "out/minikube-linux-arm64 -p functional-204186 service list" : exit status 119
functional_test.go:1463: expected 'service list' to contain *hello-node* but got -"* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-204186\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.40s)
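
The remaining ServiceCmd subtests below fail in the same "control plane is not running (state=Stopped)" state, so the hint printed in their output is the relevant recovery path. A sketch of that recovery, using the profile name from this report:

	out/minikube-linux-arm64 start -p functional-204186
	out/minikube-linux-arm64 -p functional-204186 service list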

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 service list -o json
functional_test.go:1488: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 service list -o json: exit status 119 (427.672656ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-204186"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-204186"

                                                
                                                
** /stderr **
functional_test.go:1490: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-204186 service list -o json": exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 service --namespace=default --https --url hello-node: exit status 119 (415.160257ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-204186"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-204186"

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-linux-arm64 -p functional-204186 service --namespace=default --https --url hello-node" : exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr: (3.971654722s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-204186" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 service hello-node --url --format={{.IP}}: exit status 119 (380.679812ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-204186"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-204186"

                                                
                                                
** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-204186 service hello-node --url --format={{.IP}}": exit status 119
functional_test.go:1547: "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-204186\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.38s)
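The Format subtest asks for `--format={{.IP}}` and then rejects the stopped-control-plane message because it does not parse as an IP address. A minimal sketch of that kind of validation (illustrative, not the test's own code) using net.ParseIP:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The string actually returned in this run, per the log above.
        got := "* This control plane is not running! (state=Stopped)"
        if net.ParseIP(got) == nil {
            fmt.Printf("%q is not a valid IP\n", got)
        }
        // A real address from this report parses fine.
        fmt.Println(net.ParseIP("192.168.49.2") != nil) // true
    }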

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 service hello-node --url: exit status 119 (394.633275ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-204186"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-204186"

                                                
                                                
** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-linux-arm64 -p functional-204186 service hello-node --url": exit status 119
functional_test.go:1564: found endpoint for hello-node: * This control plane is not running! (state=Stopped)
To start a cluster, run: "minikube start -p functional-204186"
functional_test.go:1568: failed to parse "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-204186\"": parse "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-204186\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
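The parse failure comes from handing the multi-line "control plane is not running" message to Go's net/url, which rejects the embedded newline as a control character. A minimal reproduction of that error (illustrative), using the exact string from the log above:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        s := "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-204186\""
        _, err := url.Parse(s)
        fmt.Println(err) // net/url: invalid control character in URL
    }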

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.073355396s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-204186
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 image load --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr: (3.762569189s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-204186" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.30s)
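This subtest pulls addon-resizer:1.8.9, retags it for the profile, loads it with `image load --daemon`, and then checks `image ls` for the tag, which never shows up. A compact sketch of the same pull/tag/load/verify sequence driven from Go (illustrative only; the commands and names are taken from the log above, and the run helper is local to this sketch):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        img := "gcr.io/google-containers/addon-resizer"
        tag := img + ":functional-204186"

        // Same sequence as the test log above.
        steps := [][]string{
            {"docker", "pull", img + ":1.8.9"},
            {"docker", "tag", img + ":1.8.9", tag},
            {"out/minikube-linux-arm64", "-p", "functional-204186", "image", "load", "--daemon", tag},
        }
        for _, s := range steps {
            if out, err := run(s[0], s[1:]...); err != nil {
                fmt.Printf("%v failed: %v\n%s", s, err, out)
                return
            }
        }
        // Verify the tag is visible inside minikube.
        out, _ := run("out/minikube-linux-arm64", "-p", "functional-204186", "image", "ls")
        fmt.Println("image present:", strings.Contains(out, tag))
    }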

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image save gcr.io/google-containers/addon-resizer:functional-204186 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)
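`image save` returned without error but never wrote the tarball, and the missing file also explains the ImageLoadFromFile failure below, whose stderr reports "no such file or directory" for the same path. A minimal existence check of the kind the test performs (illustrative; the path is copied from the log above):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        p := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
        if _, err := os.Stat(p); os.IsNotExist(err) {
            fmt.Println("expected", p, "to exist after `image save`, but it does not")
        }
    }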

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1212 00:20:22.260702 1171586 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:20:22.261447 1171586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:22.261465 1171586 out.go:309] Setting ErrFile to fd 2...
	I1212 00:20:22.261473 1171586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:22.261809 1171586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:20:22.262620 1171586 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:20:22.262828 1171586 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:20:22.263566 1171586 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
	I1212 00:20:22.282655 1171586 ssh_runner.go:195] Run: systemctl --version
	I1212 00:20:22.282763 1171586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
	I1212 00:20:22.301048 1171586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
	I1212 00:20:22.397159 1171586 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1212 00:20:22.397217 1171586 cache_images.go:254] Failed to load cached images for profile functional-204186. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1212 00:20:22.397236 1171586 cache_images.go:262] succeeded pushing to: 
	I1212 00:20:22.397242 1171586 cache_images.go:263] failed pushing to: functional-204186

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (55.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-491046 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-491046 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.945184373s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-491046 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-491046 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e824b3e1-2b09-4e79-8583-5d7e6c9a39b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e824b3e1-2b09-4e79-8583-5d7e6c9a39b4] Running
E1212 00:24:12.324609 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.016040278s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-491046 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.02340137s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
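The lookup sends the query for hello-john.test directly to the node at 192.168.49.2, where the ingress-dns addon should answer, and times out with no servers reached. A minimal sketch of the same directed query in Go, with the resolver pinned to that server (illustrative, not the test's code; the host name and address are from the log above):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Equivalent to `nslookup hello-john.test 192.168.49.2`.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "192.168.49.2:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "hello-john.test")
        fmt.Println(addrs, err) // in this run: timeout, no servers could be reached
    }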
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons disable ingress-dns --alsologtostderr -v=1: (3.426389263s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons disable ingress --alsologtostderr -v=1: (7.593925104s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-491046
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-491046:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bc4573440cae04adcbe33a561e6921b5d18d978ea14f2ca852c68e440c485b03",
	        "Created": "2023-12-12T00:22:25.471395565Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1176086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:22:25.770616279Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/bc4573440cae04adcbe33a561e6921b5d18d978ea14f2ca852c68e440c485b03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc4573440cae04adcbe33a561e6921b5d18d978ea14f2ca852c68e440c485b03/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc4573440cae04adcbe33a561e6921b5d18d978ea14f2ca852c68e440c485b03/hosts",
	        "LogPath": "/var/lib/docker/containers/bc4573440cae04adcbe33a561e6921b5d18d978ea14f2ca852c68e440c485b03/bc4573440cae04adcbe33a561e6921b5d18d978ea14f2ca852c68e440c485b03-json.log",
	        "Name": "/ingress-addon-legacy-491046",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-491046:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-491046",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6f060993762396a73cf5a79466068bf7b07df1df979e04c495a0491465a72a50-init/diff:/var/lib/docker/overlay2/83f94b9f515065f4cf4d4337d1fbe3fc13b585131a89a52ad8eb2b6bf7d119ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f060993762396a73cf5a79466068bf7b07df1df979e04c495a0491465a72a50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f060993762396a73cf5a79466068bf7b07df1df979e04c495a0491465a72a50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f060993762396a73cf5a79466068bf7b07df1df979e04c495a0491465a72a50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-491046",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-491046/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-491046",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-491046",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-491046",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d6fd73517e549ab96c2688a37e5c59b327dcaed2c4f2624be8488fa8d1d4132",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34047"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d6fd73517e5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-491046": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bc4573440cae",
	                        "ingress-addon-legacy-491046"
	                    ],
	                    "NetworkID": "b1568d41d79473a4261f849d57f0469f7b28130d88f8656f3287bb719871b1a2",
	                    "EndpointID": "76c689418d5f4e8f9124d675b014e3d725e4de66f839f9fbc68989aa59c279c3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
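The inspect output above confirms the node container is running and holds 192.168.49.2 on the profile network, the same address the failed lookup targeted. A single field can be pulled out with docker's --format template instead of reading the full JSON, similar to the -f template the harness itself uses earlier in this log; a small sketch (illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Extract just the container IP from the profile network.
        out, err := exec.Command("docker", "inspect",
            "-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
            "ingress-addon-legacy-491046").Output()
        if err != nil {
            fmt.Println("docker inspect failed:", err)
            return
        }
        fmt.Printf("node IP: %s\n", out) // 192.168.49.2 in this run
    }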
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-491046 -n ingress-addon-legacy-491046
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 logs -n 25
E1212 00:24:40.015297 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-491046 logs -n 25: (1.473443878s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-204186 ssh findmnt        | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-204186 ssh findmnt        | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-204186                 | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| start          | -p functional-204186                 | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| start          | -p functional-204186                 | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| start          | -p functional-204186                 | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | -p functional-204186                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-204186 ssh pgrep          | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-204186 image build -t     | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | localhost/my-image:functional-204186 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-204186 image ls           | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	| image          | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-204186                    | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:21 UTC | 12 Dec 23 00:21 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-204186                 | functional-204186           | jenkins | v1.32.0 | 12 Dec 23 00:22 UTC | 12 Dec 23 00:22 UTC |
	| start          | -p ingress-addon-legacy-491046       | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:22 UTC | 12 Dec 23 00:23 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-491046          | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:23 UTC | 12 Dec 23 00:23 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-491046          | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:23 UTC | 12 Dec 23 00:23 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-491046          | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:24 UTC | 12 Dec 23 00:24 UTC |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-491046 ip       | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:24 UTC | 12 Dec 23 00:24 UTC |
	| addons         | ingress-addon-legacy-491046          | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:24 UTC | 12 Dec 23 00:24 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-491046          | ingress-addon-legacy-491046 | jenkins | v1.32.0 | 12 Dec 23 00:24 UTC | 12 Dec 23 00:24 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:22:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:22:05.818047 1175628 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:22:05.818244 1175628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:05.818253 1175628 out.go:309] Setting ErrFile to fd 2...
	I1212 00:22:05.818259 1175628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:05.818525 1175628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:22:05.818963 1175628 out.go:303] Setting JSON to false
	I1212 00:22:05.819888 1175628 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25473,"bootTime":1702315053,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:22:05.819961 1175628 start.go:138] virtualization:  
	I1212 00:22:05.824378 1175628 out.go:177] * [ingress-addon-legacy-491046] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:22:05.826533 1175628 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:22:05.826738 1175628 notify.go:220] Checking for updates...
	I1212 00:22:05.831516 1175628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:22:05.833754 1175628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:22:05.835782 1175628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:22:05.838288 1175628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:22:05.840489 1175628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:22:05.843820 1175628 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:22:05.869191 1175628 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:22:05.869303 1175628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:22:05.953923 1175628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:22:05.944164542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:22:05.954033 1175628 docker.go:295] overlay module found
	I1212 00:22:05.956725 1175628 out.go:177] * Using the docker driver based on user configuration
	I1212 00:22:05.958935 1175628 start.go:298] selected driver: docker
	I1212 00:22:05.958957 1175628 start.go:902] validating driver "docker" against <nil>
	I1212 00:22:05.958980 1175628 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:22:05.959790 1175628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:22:06.029935 1175628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:22:06.01966463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:22:06.030099 1175628 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:22:06.030348 1175628 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:22:06.032703 1175628 out.go:177] * Using Docker driver with root privileges
	I1212 00:22:06.035849 1175628 cni.go:84] Creating CNI manager for ""
	I1212 00:22:06.035879 1175628 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:22:06.035894 1175628 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:22:06.035907 1175628 start_flags.go:323] config:
	{Name:ingress-addon-legacy-491046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-491046 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:22:06.038555 1175628 out.go:177] * Starting control plane node ingress-addon-legacy-491046 in cluster ingress-addon-legacy-491046
	I1212 00:22:06.041491 1175628 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:22:06.043853 1175628 out.go:177] * Pulling base image ...
	I1212 00:22:06.046538 1175628 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1212 00:22:06.046577 1175628 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:22:06.066805 1175628 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:22:06.066833 1175628 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:22:06.121028 1175628 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1212 00:22:06.121068 1175628 cache.go:56] Caching tarball of preloaded images
	I1212 00:22:06.121269 1175628 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1212 00:22:06.123760 1175628 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 00:22:06.126201 1175628 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:22:06.239975 1175628 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1212 00:22:17.582869 1175628 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:22:17.582977 1175628 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:22:18.775979 1175628 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1212 00:22:18.776393 1175628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/config.json ...
	I1212 00:22:18.776427 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/config.json: {Name:mk9cdf605701df908cad5489ec647c032a4ba564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:18.776592 1175628 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:22:18.776650 1175628 start.go:365] acquiring machines lock for ingress-addon-legacy-491046: {Name:mk4c3a955c3218af528b89a3ee45cc27a4dae3a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:22:18.776693 1175628 start.go:369] acquired machines lock for "ingress-addon-legacy-491046" in 33.838µs
	I1212 00:22:18.776711 1175628 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-491046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-491046 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:22:18.776776 1175628 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:22:18.779229 1175628 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 00:22:18.779536 1175628 start.go:159] libmachine.API.Create for "ingress-addon-legacy-491046" (driver="docker")
	I1212 00:22:18.779568 1175628 client.go:168] LocalClient.Create starting
	I1212 00:22:18.779647 1175628 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem
	I1212 00:22:18.779696 1175628 main.go:141] libmachine: Decoding PEM data...
	I1212 00:22:18.779714 1175628 main.go:141] libmachine: Parsing certificate...
	I1212 00:22:18.779781 1175628 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem
	I1212 00:22:18.779803 1175628 main.go:141] libmachine: Decoding PEM data...
	I1212 00:22:18.779816 1175628 main.go:141] libmachine: Parsing certificate...
	I1212 00:22:18.780281 1175628 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-491046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:22:18.801024 1175628 cli_runner.go:211] docker network inspect ingress-addon-legacy-491046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:22:18.801108 1175628 network_create.go:281] running [docker network inspect ingress-addon-legacy-491046] to gather additional debugging logs...
	I1212 00:22:18.801129 1175628 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-491046
	W1212 00:22:18.825698 1175628 cli_runner.go:211] docker network inspect ingress-addon-legacy-491046 returned with exit code 1
	I1212 00:22:18.825733 1175628 network_create.go:284] error running [docker network inspect ingress-addon-legacy-491046]: docker network inspect ingress-addon-legacy-491046: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-491046 not found
	I1212 00:22:18.825750 1175628 network_create.go:286] output of [docker network inspect ingress-addon-legacy-491046]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-491046 not found
	
	** /stderr **
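The non-zero exit above is expected on a fresh start: the profile's Docker network simply does not exist yet, so minikube goes on to create it. A minimal hand-run version of the same pre-flight check, assuming only the Docker CLI and the profile name from this log, looks like:

	# Check whether the profile network already exists (non-zero exit means it must be created).
	if docker network inspect ingress-addon-legacy-491046 >/dev/null 2>&1; then
	  echo "network already exists"
	else
	  echo "network missing; it will be created next"
	fi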
	I1212 00:22:18.825860 1175628 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:22:18.843997 1175628 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400059b270}
	I1212 00:22:18.844035 1175628 network_create.go:124] attempt to create docker network ingress-addon-legacy-491046 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:22:18.844092 1175628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-491046 ingress-addon-legacy-491046
	I1212 00:22:18.915362 1175628 network_create.go:108] docker network ingress-addon-legacy-491046 192.168.49.0/24 created
	I1212 00:22:18.915391 1175628 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-491046" container
	I1212 00:22:18.915472 1175628 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:22:18.933222 1175628 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-491046 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-491046 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:22:18.951786 1175628 oci.go:103] Successfully created a docker volume ingress-addon-legacy-491046
	I1212 00:22:18.951874 1175628 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-491046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-491046 --entrypoint /usr/bin/test -v ingress-addon-legacy-491046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:22:20.494218 1175628 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-491046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-491046 --entrypoint /usr/bin/test -v ingress-addon-legacy-491046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib: (1.542300628s)
	I1212 00:22:20.494247 1175628 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-491046
	I1212 00:22:20.494266 1175628 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1212 00:22:20.494284 1175628 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:22:20.494376 1175628 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-491046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:22:25.387032 1175628 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-491046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.892614182s)
	I1212 00:22:25.387065 1175628 kic.go:203] duration metric: took 4.892778 seconds to extract preloaded images to volume
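The preload step mounts the cached lz4 tarball read-only and untars it straight into the profile volume using the kicbase image's own tar. Reformatted for readability, the exact command from the log (same paths and image digest; adjust both for a different host or profile) is:

	# Extract the preloaded images into the named Docker volume, as minikube does above.
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro \
	  -v ingress-addon-legacy-491046:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 \
	  -I lz4 -xf /preloaded.tar -C /extractDir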
	W1212 00:22:25.387204 1175628 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:22:25.387340 1175628 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:22:25.455194 1175628 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-491046 --name ingress-addon-legacy-491046 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-491046 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-491046 --network ingress-addon-legacy-491046 --ip 192.168.49.2 --volume ingress-addon-legacy-491046:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:22:25.780421 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Running}}
	I1212 00:22:25.807947 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Status}}
	I1212 00:22:25.837133 1175628 cli_runner.go:164] Run: docker exec ingress-addon-legacy-491046 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:22:25.908280 1175628 oci.go:144] the created container "ingress-addon-legacy-491046" has a running status.
	I1212 00:22:25.908311 1175628 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa...
	I1212 00:22:26.632751 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:22:26.632866 1175628 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:22:26.664204 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Status}}
	I1212 00:22:26.689751 1175628 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:22:26.689771 1175628 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-491046 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:22:26.780056 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Status}}
	I1212 00:22:26.806739 1175628 machine.go:88] provisioning docker machine ...
	I1212 00:22:26.806780 1175628 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-491046"
	I1212 00:22:26.806849 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:26.827664 1175628 main.go:141] libmachine: Using SSH client type: native
	I1212 00:22:26.829129 1175628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34048 <nil> <nil>}
	I1212 00:22:26.829153 1175628 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-491046 && echo "ingress-addon-legacy-491046" | sudo tee /etc/hostname
	I1212 00:22:26.993602 1175628 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-491046
	
	I1212 00:22:26.993687 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:27.022368 1175628 main.go:141] libmachine: Using SSH client type: native
	I1212 00:22:27.023650 1175628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34048 <nil> <nil>}
	I1212 00:22:27.023686 1175628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-491046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-491046/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-491046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:22:27.168558 1175628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
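Both SSH commands above run against the node container's published port 22 (127.0.0.1:34048 in this run). The result can be spot-checked without SSH at all, assuming the standard tooling inside the kicbase container:

	# Confirm the hostname and /etc/hosts entry that the provisioner just set.
	docker exec ingress-addon-legacy-491046 hostname
	docker exec ingress-addon-legacy-491046 grep ingress-addon-legacy-491046 /etc/hosts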
	I1212 00:22:27.168594 1175628 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1135857/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1135857/.minikube}
	I1212 00:22:27.168614 1175628 ubuntu.go:177] setting up certificates
	I1212 00:22:27.168623 1175628 provision.go:83] configureAuth start
	I1212 00:22:27.168690 1175628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-491046
	I1212 00:22:27.186767 1175628 provision.go:138] copyHostCerts
	I1212 00:22:27.186808 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem
	I1212 00:22:27.186843 1175628 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem, removing ...
	I1212 00:22:27.186858 1175628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem
	I1212 00:22:27.186938 1175628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/cert.pem (1123 bytes)
	I1212 00:22:27.187035 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem
	I1212 00:22:27.187056 1175628 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem, removing ...
	I1212 00:22:27.187066 1175628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem
	I1212 00:22:27.187095 1175628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/key.pem (1675 bytes)
	I1212 00:22:27.187148 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem
	I1212 00:22:27.187169 1175628 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem, removing ...
	I1212 00:22:27.187176 1175628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem
	I1212 00:22:27.187202 1175628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.pem (1078 bytes)
	I1212 00:22:27.187259 1175628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-491046 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-491046]
	I1212 00:22:27.674407 1175628 provision.go:172] copyRemoteCerts
	I1212 00:22:27.674481 1175628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:22:27.674531 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:27.692131 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:22:27.794058 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:22:27.794127 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:22:27.823528 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:22:27.823591 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1212 00:22:27.852606 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:22:27.852681 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:22:27.882641 1175628 provision.go:86] duration metric: configureAuth took 713.99891ms
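configureAuth generates a server certificate for the node and copies ca.pem, server.pem and server-key.pem to the remote paths listed in the auth options above. A quick way to confirm the files landed, assuming the same container name and remote paths:

	# Verify the TLS material copied by copyRemoteCerts.
	docker exec ingress-addon-legacy-491046 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem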
	I1212 00:22:27.882678 1175628 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:22:27.882904 1175628 config.go:182] Loaded profile config "ingress-addon-legacy-491046": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1212 00:22:27.882919 1175628 machine.go:91] provisioned docker machine in 1.076162414s
	I1212 00:22:27.882926 1175628 client.go:171] LocalClient.Create took 9.103352807s
	I1212 00:22:27.882945 1175628 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-491046" took 9.103410505s
	I1212 00:22:27.882957 1175628 start.go:300] post-start starting for "ingress-addon-legacy-491046" (driver="docker")
	I1212 00:22:27.882967 1175628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:22:27.883024 1175628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:22:27.883073 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:27.901776 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:22:28.006971 1175628 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:22:28.011936 1175628 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:22:28.011976 1175628 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:22:28.011992 1175628 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:22:28.011999 1175628 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:22:28.012010 1175628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/addons for local assets ...
	I1212 00:22:28.012079 1175628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1135857/.minikube/files for local assets ...
	I1212 00:22:28.012169 1175628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem -> 11412812.pem in /etc/ssl/certs
	I1212 00:22:28.012181 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem -> /etc/ssl/certs/11412812.pem
	I1212 00:22:28.012297 1175628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:22:28.023395 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:22:28.053113 1175628 start.go:303] post-start completed in 170.140314ms
	I1212 00:22:28.053497 1175628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-491046
	I1212 00:22:28.071357 1175628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/config.json ...
	I1212 00:22:28.071653 1175628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:22:28.071708 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:28.089291 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:22:28.185458 1175628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:22:28.191364 1175628 start.go:128] duration metric: createHost completed in 9.414570838s
	I1212 00:22:28.191388 1175628 start.go:83] releasing machines lock for "ingress-addon-legacy-491046", held for 9.414687096s
	I1212 00:22:28.191460 1175628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-491046
	I1212 00:22:28.209419 1175628 ssh_runner.go:195] Run: cat /version.json
	I1212 00:22:28.209468 1175628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:22:28.209539 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:28.209472 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:22:28.237289 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:22:28.239457 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:22:28.466325 1175628 ssh_runner.go:195] Run: systemctl --version
	I1212 00:22:28.472411 1175628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:22:28.478027 1175628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1212 00:22:28.510652 1175628 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:22:28.510777 1175628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:22:28.545109 1175628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1212 00:22:28.545180 1175628 start.go:475] detecting cgroup driver to use...
	I1212 00:22:28.545228 1175628 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:22:28.545306 1175628 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:22:28.560233 1175628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:22:28.574317 1175628 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:22:28.574406 1175628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:22:28.591491 1175628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:22:28.608384 1175628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:22:28.713129 1175628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:22:28.820414 1175628 docker.go:219] disabling docker service ...
	I1212 00:22:28.820532 1175628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:22:28.842704 1175628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:22:28.857745 1175628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:22:28.954872 1175628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:22:29.048757 1175628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:22:29.062737 1175628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:22:29.083013 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1212 00:22:29.095087 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:22:29.108440 1175628 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:22:29.108539 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:22:29.121107 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:22:29.133524 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:22:29.145952 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:22:29.158708 1175628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:22:29.170261 1175628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:22:29.183125 1175628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:22:29.193984 1175628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:22:29.204184 1175628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:22:29.302927 1175628 ssh_runner.go:195] Run: sudo systemctl restart containerd
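The block above rewrites /etc/containerd/config.toml so containerd matches the detected cgroupfs driver, uses the runc v2 shim and the pause:3.2 sandbox image, then restarts the service. Condensed into a hand-runnable form on the node (same sed expressions as in the log):

	# Apply the key containerd settings and restart, as minikube does above.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd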
	I1212 00:22:29.449426 1175628 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:22:29.449501 1175628 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:22:29.454170 1175628 start.go:543] Will wait 60s for crictl version
	I1212 00:22:29.454237 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:29.458597 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:22:29.500738 1175628 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
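The two 60s waits cover the containerd socket appearing and crictl being able to talk to it. The same sanity check can be run by hand on the node (crictl path and expected runtime version taken from this log):

	# Runtime sanity check: containerd 1.6.26 should answer on the default socket.
	which crictl
	sudo /usr/bin/crictl version
	sudo crictl info | head -n 20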
	I1212 00:22:29.500860 1175628 ssh_runner.go:195] Run: containerd --version
	I1212 00:22:29.529999 1175628 ssh_runner.go:195] Run: containerd --version
	I1212 00:22:29.565448 1175628 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.26 ...
	I1212 00:22:29.567406 1175628 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-491046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:22:29.584912 1175628 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:22:29.589754 1175628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:22:29.603649 1175628 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1212 00:22:29.603721 1175628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:22:29.647724 1175628 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 00:22:29.647798 1175628 ssh_runner.go:195] Run: which lz4
	I1212 00:22:29.652580 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1212 00:22:29.652682 1175628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 00:22:29.657195 1175628 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:22:29.657228 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1212 00:22:31.714613 1175628 containerd.go:547] Took 2.061967 seconds to copy over tarball
	I1212 00:22:31.714733 1175628 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 00:22:34.455957 1175628 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.741175866s)
	I1212 00:22:34.456023 1175628 containerd.go:554] Took 2.741338 seconds to extract the tarball
	I1212 00:22:34.456038 1175628 ssh_runner.go:146] rm: /preloaded.tar.lz4
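Because the earlier crictl scan did not find the preloaded kube-apiserver image, the tarball is copied to the node and unpacked directly into /var. The same sequence, runnable by hand on the node after the copy:

	# Unpack the preload on the node and let containerd pick up the new content.
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo systemctl restart containerd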
	I1212 00:22:34.542266 1175628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:22:34.649195 1175628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:22:34.787413 1175628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:22:34.833889 1175628 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 00:22:34.833914 1175628 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:22:34.833959 1175628 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:22:34.834009 1175628 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:22:34.834168 1175628 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 00:22:34.834212 1175628 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:22:34.834266 1175628 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:22:34.834334 1175628 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 00:22:34.834404 1175628 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:22:34.834515 1175628 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:22:34.836036 1175628 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:22:34.836049 1175628 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:22:34.836160 1175628 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:22:34.836393 1175628 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 00:22:34.836454 1175628 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:22:34.836605 1175628 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:22:34.836673 1175628 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:22:34.836820 1175628 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 00:22:35.155888 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"
	I1212 00:22:35.155966 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.187027 1175628 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.187199 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03"
	I1212 00:22:35.187259 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.187887 1175628 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.188009 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7"
	I1212 00:22:35.188052 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.194023 1175628 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.194203 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
	I1212 00:22:35.194272 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.203336 1175628 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.203519 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
	I1212 00:22:35.203595 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.221278 1175628 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.221502 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18"
	I1212 00:22:35.221571 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.248322 1175628 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.248496 1175628 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79"
	I1212 00:22:35.248572 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1212 00:22:35.399002 1175628 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1212 00:22:35.399174 1175628 containerd.go:251] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1212 00:22:35.399245 1175628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
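Each "Checking existence of image" probe pairs the expected arm64 digest with a listing from containerd's k8s.io namespace; a mismatch (as with the amd64 manifests flagged above) marks the image for transfer. The listing side can be reproduced on the node with the same commands the log runs:

	# Inspect what containerd actually holds in the k8s.io namespace.
	sudo ctr -n=k8s.io images check
	sudo crictl images --output json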
	I1212 00:22:35.409771 1175628 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1212 00:22:35.409853 1175628 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 00:22:35.409930 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:35.796455 1175628 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1212 00:22:35.796611 1175628 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:22:35.796686 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:35.796545 1175628 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1212 00:22:35.796789 1175628 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:22:35.796831 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:35.944782 1175628 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1212 00:22:35.944868 1175628 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:22:35.944953 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:35.972497 1175628 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1212 00:22:35.972596 1175628 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 00:22:35.972672 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:36.100021 1175628 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1212 00:22:36.100108 1175628 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:22:36.100195 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:36.121706 1175628 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1212 00:22:36.122259 1175628 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1212 00:22:36.122284 1175628 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:22:36.122342 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:36.122444 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:22:36.122517 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 00:22:36.122573 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:22:36.122634 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:22:36.122712 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 00:22:36.122778 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:22:36.121753 1175628 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:22:36.122849 1175628 ssh_runner.go:195] Run: which crictl
	I1212 00:22:36.214588 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1212 00:22:36.214746 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:22:36.297739 1175628 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:22:36.297884 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1212 00:22:36.297946 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 00:22:36.298001 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 00:22:36.298058 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1212 00:22:36.298110 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 00:22:36.342417 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 00:22:36.350582 1175628 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 00:22:36.350635 1175628 cache_images.go:92] LoadImages completed in 1.516707688s
	W1212 00:22:36.350696 1175628 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1212 00:22:36.350759 1175628 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:22:36.396457 1175628 cni.go:84] Creating CNI manager for ""
	I1212 00:22:36.396481 1175628 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:22:36.396512 1175628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:22:36.396535 1175628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-491046 NodeName:ingress-addon-legacy-491046 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 00:22:36.396667 1175628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-491046"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:22:36.396732 1175628 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-491046 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-491046 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
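The generated drop-in is written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to the kubelet unit itself, so the effective command line can be checked on the node once both files are in place (standard systemd tooling assumed):

	# Show the merged kubelet unit, including the ExecStart override rendered above.
	sudo systemctl daemon-reload
	systemctl cat kubelet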
	I1212 00:22:36.396797 1175628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 00:22:36.407233 1175628 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:22:36.407391 1175628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:22:36.417669 1175628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1212 00:22:36.437760 1175628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 00:22:36.458945 1175628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
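kubeadm.yaml.new above is the rendered form of the three-document config printed earlier. Before the real kubeadm init runs, it can be validated with a dry run; this is a sketch only, assuming a kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries/v1.18.20 (only the kubelet path is confirmed by this log):

	# Dry-run the generated kubeadm config on the node; makes no changes to the host.
	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run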
	I1212 00:22:36.479996 1175628 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:22:36.484505 1175628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:22:36.497795 1175628 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046 for IP: 192.168.49.2
	I1212 00:22:36.497866 1175628 certs.go:190] acquiring lock for shared ca certs: {Name:mk518d45f153d561b6d30fa5c8435abd4f573517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:36.498012 1175628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key
	I1212 00:22:36.498089 1175628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key
	I1212 00:22:36.498199 1175628 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.key
	I1212 00:22:36.498235 1175628 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt with IP's: []
	I1212 00:22:37.304106 1175628 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt ...
	I1212 00:22:37.304141 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: {Name:mkb0643958820f6ae570df3d9dd4ade6e41ac318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:37.304819 1175628 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.key ...
	I1212 00:22:37.304842 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.key: {Name:mk1f20c041323ab4f5949872f11f7797f22036dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:37.304983 1175628 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key.dd3b5fb2
	I1212 00:22:37.305022 1175628 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:22:38.467350 1175628 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt.dd3b5fb2 ...
	I1212 00:22:38.467383 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt.dd3b5fb2: {Name:mk7e68f495ee6119dc5bd573c23e2949dfe3f941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:38.467565 1175628 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key.dd3b5fb2 ...
	I1212 00:22:38.467579 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key.dd3b5fb2: {Name:mke78ab8275a83ad203f462c0ea1ebc643778c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:38.467664 1175628 certs.go:337] copying /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt
	I1212 00:22:38.467747 1175628 certs.go:341] copying /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key
	I1212 00:22:38.467809 1175628 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.key
	I1212 00:22:38.467826 1175628 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.crt with IP's: []
	I1212 00:22:38.864805 1175628 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.crt ...
	I1212 00:22:38.864838 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.crt: {Name:mk81cc0e18b9bfce0bfaee89baad222d460f08bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:38.865029 1175628 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.key ...
	I1212 00:22:38.865045 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.key: {Name:mk1fe00eb4d2d9edda6de8d17f753772b96d8fc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:22:38.865135 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:22:38.865156 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:22:38.865169 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:22:38.865197 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:22:38.865213 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:22:38.865227 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:22:38.865242 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:22:38.865253 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:22:38.865311 1175628 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem (1338 bytes)
	W1212 00:22:38.865352 1175628 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281_empty.pem, impossibly tiny 0 bytes
	I1212 00:22:38.865366 1175628 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:22:38.865391 1175628 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:22:38.865420 1175628 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:22:38.865449 1175628 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/certs/key.pem (1675 bytes)
	I1212 00:22:38.865501 1175628 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem (1708 bytes)
	I1212 00:22:38.865532 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:22:38.865550 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem -> /usr/share/ca-certificates/1141281.pem
	I1212 00:22:38.865565 1175628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem -> /usr/share/ca-certificates/11412812.pem
	I1212 00:22:38.866214 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:22:38.895636 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:22:38.925305 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:22:38.953717 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:22:38.983094 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:22:39.014406 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:22:39.046885 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:22:39.078104 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:22:39.108253 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:22:39.138893 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/certs/1141281.pem --> /usr/share/ca-certificates/1141281.pem (1338 bytes)
	I1212 00:22:39.167365 1175628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/ssl/certs/11412812.pem --> /usr/share/ca-certificates/11412812.pem (1708 bytes)
	I1212 00:22:39.195716 1175628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:22:39.217167 1175628 ssh_runner.go:195] Run: openssl version
	I1212 00:22:39.224129 1175628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1141281.pem && ln -fs /usr/share/ca-certificates/1141281.pem /etc/ssl/certs/1141281.pem"
	I1212 00:22:39.235784 1175628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1141281.pem
	I1212 00:22:39.240399 1175628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:18 /usr/share/ca-certificates/1141281.pem
	I1212 00:22:39.240491 1175628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1141281.pem
	I1212 00:22:39.249008 1175628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1141281.pem /etc/ssl/certs/51391683.0"
	I1212 00:22:39.260584 1175628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11412812.pem && ln -fs /usr/share/ca-certificates/11412812.pem /etc/ssl/certs/11412812.pem"
	I1212 00:22:39.271895 1175628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11412812.pem
	I1212 00:22:39.276468 1175628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:18 /usr/share/ca-certificates/11412812.pem
	I1212 00:22:39.276529 1175628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11412812.pem
	I1212 00:22:39.284928 1175628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11412812.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:22:39.296225 1175628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:22:39.307831 1175628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:22:39.312856 1175628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:22:39.312926 1175628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:22:39.321333 1175628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:22:39.332955 1175628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:22:39.337475 1175628 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:22:39.337549 1175628 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-491046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-491046 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:22:39.337632 1175628 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:22:39.337698 1175628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:22:39.377665 1175628 cri.go:89] found id: ""
	I1212 00:22:39.377784 1175628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:22:39.388211 1175628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:22:39.398884 1175628 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:22:39.399000 1175628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:22:39.409450 1175628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:22:39.409493 1175628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:22:39.463849 1175628 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 00:22:39.464151 1175628 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:22:39.517388 1175628 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:22:39.517465 1175628 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:22:39.517507 1175628 kubeadm.go:322] OS: Linux
	I1212 00:22:39.517557 1175628 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:22:39.517607 1175628 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:22:39.517654 1175628 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:22:39.517703 1175628 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:22:39.517752 1175628 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:22:39.517803 1175628 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:22:39.605459 1175628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:22:39.605589 1175628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:22:39.605689 1175628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:22:39.843473 1175628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:22:39.845036 1175628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:22:39.845285 1175628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:22:39.962187 1175628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:22:39.979056 1175628 out.go:204]   - Generating certificates and keys ...
	I1212 00:22:39.979206 1175628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:22:39.979273 1175628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:22:40.376746 1175628 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:22:41.107648 1175628 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:22:41.319787 1175628 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:22:41.725809 1175628 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:22:42.150701 1175628 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:22:42.151383 1175628 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-491046 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:22:42.320275 1175628 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:22:42.320744 1175628 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-491046 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:22:42.991270 1175628 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:22:43.420100 1175628 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:22:43.643756 1175628 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:22:43.644273 1175628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:22:44.322009 1175628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:22:44.526303 1175628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:22:44.885932 1175628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:22:45.196574 1175628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:22:45.197468 1175628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:22:45.200650 1175628 out.go:204]   - Booting up control plane ...
	I1212 00:22:45.200768 1175628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:22:45.211741 1175628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:22:45.215825 1175628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:22:45.215922 1175628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:22:45.219342 1175628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:22:56.724005 1175628 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502669 seconds
	I1212 00:22:56.724162 1175628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:22:56.738200 1175628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:22:57.270919 1175628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:22:57.271098 1175628 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-491046 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 00:22:57.779161 1175628 kubeadm.go:322] [bootstrap-token] Using token: xr63nw.3aghf6dso49hsmpy
	I1212 00:22:57.781362 1175628 out.go:204]   - Configuring RBAC rules ...
	I1212 00:22:57.781504 1175628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:22:57.787158 1175628 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:22:57.795096 1175628 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:22:57.801723 1175628 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:22:57.809192 1175628 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:22:57.813405 1175628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:22:57.826590 1175628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:22:58.115668 1175628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:22:58.233557 1175628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:22:58.233581 1175628 kubeadm.go:322] 
	I1212 00:22:58.233666 1175628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:22:58.233706 1175628 kubeadm.go:322] 
	I1212 00:22:58.233802 1175628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:22:58.233815 1175628 kubeadm.go:322] 
	I1212 00:22:58.233863 1175628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:22:58.233936 1175628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:22:58.234012 1175628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:22:58.234024 1175628 kubeadm.go:322] 
	I1212 00:22:58.234091 1175628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:22:58.234208 1175628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:22:58.234287 1175628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:22:58.234296 1175628 kubeadm.go:322] 
	I1212 00:22:58.234382 1175628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:22:58.234462 1175628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:22:58.234470 1175628 kubeadm.go:322] 
	I1212 00:22:58.234556 1175628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xr63nw.3aghf6dso49hsmpy \
	I1212 00:22:58.234666 1175628 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5475a393936b6bc511cacca1c76e18c5ea4ff503b753104aaff3ee2c1a2497ed \
	I1212 00:22:58.234695 1175628 kubeadm.go:322]     --control-plane 
	I1212 00:22:58.234704 1175628 kubeadm.go:322] 
	I1212 00:22:58.234789 1175628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:22:58.234798 1175628 kubeadm.go:322] 
	I1212 00:22:58.234881 1175628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xr63nw.3aghf6dso49hsmpy \
	I1212 00:22:58.234990 1175628 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5475a393936b6bc511cacca1c76e18c5ea4ff503b753104aaff3ee2c1a2497ed 
	I1212 00:22:58.238676 1175628 kubeadm.go:322] W1212 00:22:39.463018    1091 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 00:22:58.238893 1175628 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:22:58.238995 1175628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:22:58.239123 1175628 kubeadm.go:322] W1212 00:22:45.212093    1091 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 00:22:58.239244 1175628 kubeadm.go:322] W1212 00:22:45.213790    1091 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 00:22:58.239261 1175628 cni.go:84] Creating CNI manager for ""
	I1212 00:22:58.239272 1175628 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:22:58.241553 1175628 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:22:58.243583 1175628 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:22:58.249011 1175628 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1212 00:22:58.249046 1175628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:22:58.285141 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:22:58.768751 1175628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:22:58.768841 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:22:58.768928 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=ingress-addon-legacy-491046 minikube.k8s.io/updated_at=2023_12_12T00_22_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:22:58.785249 1175628 ops.go:34] apiserver oom_adj: -16
	I1212 00:22:58.960304 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:22:59.061864 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:22:59.676508 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:00.176679 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:00.676059 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:01.176638 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:01.676573 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:02.176979 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:02.676219 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:03.176020 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:03.676826 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:04.176886 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:04.676552 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:05.176935 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:05.676274 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:06.176525 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:06.676754 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:07.176094 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:07.676516 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:08.176774 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:08.676867 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:09.176062 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:09.676814 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:10.176900 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:10.676724 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:11.176449 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:11.676509 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:12.176523 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:12.676781 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:13.176547 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:13.676517 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:14.176510 1175628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:23:14.281680 1175628 kubeadm.go:1088] duration metric: took 15.512912786s to wait for elevateKubeSystemPrivileges.
	I1212 00:23:14.281709 1175628 kubeadm.go:406] StartCluster complete in 34.944164226s
	I1212 00:23:14.281726 1175628 settings.go:142] acquiring lock: {Name:mk888158b3cbabbb2583b6a6f74ff62a9621d5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:23:14.281786 1175628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:23:14.282479 1175628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1135857/kubeconfig: {Name:mkea8ea25a391ae5db2568a02e638c76b0d6995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:23:14.283158 1175628 kapi.go:59] client config for ingress-addon-legacy-491046: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:23:14.283393 1175628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:23:14.283656 1175628 config.go:182] Loaded profile config "ingress-addon-legacy-491046": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1212 00:23:14.283799 1175628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:23:14.283866 1175628 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-491046"
	I1212 00:23:14.283884 1175628 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-491046"
	I1212 00:23:14.283919 1175628 host.go:66] Checking if "ingress-addon-legacy-491046" exists ...
	I1212 00:23:14.284371 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Status}}
	I1212 00:23:14.284921 1175628 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:23:14.285360 1175628 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-491046"
	I1212 00:23:14.285376 1175628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-491046"
	I1212 00:23:14.285670 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Status}}
	I1212 00:23:14.356861 1175628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:23:14.359515 1175628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:23:14.359534 1175628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:23:14.359589 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:23:14.356844 1175628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-491046" context rescaled to 1 replicas
	I1212 00:23:14.363422 1175628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:23:14.359286 1175628 kapi.go:59] client config for ingress-addon-legacy-491046: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:23:14.365803 1175628 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-491046"
	I1212 00:23:14.365836 1175628 host.go:66] Checking if "ingress-addon-legacy-491046" exists ...
	I1212 00:23:14.366109 1175628 out.go:177] * Verifying Kubernetes components...
	I1212 00:23:14.368040 1175628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:23:14.366558 1175628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-491046 --format={{.State.Status}}
	I1212 00:23:14.396993 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:23:14.411918 1175628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:23:14.411938 1175628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:23:14.411996 1175628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-491046
	I1212 00:23:14.444936 1175628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/ingress-addon-legacy-491046/id_rsa Username:docker}
	I1212 00:23:14.602835 1175628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:23:14.603878 1175628 kapi.go:59] client config for ingress-addon-legacy-491046: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:23:14.604191 1175628 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-491046" to be "Ready" ...
	I1212 00:23:14.610694 1175628 node_ready.go:49] node "ingress-addon-legacy-491046" has status "Ready":"True"
	I1212 00:23:14.610723 1175628 node_ready.go:38] duration metric: took 6.500543ms waiting for node "ingress-addon-legacy-491046" to be "Ready" ...
	I1212 00:23:14.610734 1175628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:23:14.625722 1175628 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-q79bm" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:14.693883 1175628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:23:14.712726 1175628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:23:15.343905 1175628 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 00:23:15.510385 1175628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:23:15.512580 1175628 addons.go:502] enable addons completed in 1.228785274s: enabled=[storage-provisioner default-storageclass]
	I1212 00:23:16.639177 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:19.138357 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:21.138959 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:23.639144 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:26.138635 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:28.139250 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:30.141250 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:32.639016 1175628 pod_ready.go:102] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"False"
	I1212 00:23:33.639239 1175628 pod_ready.go:92] pod "coredns-66bff467f8-q79bm" in "kube-system" namespace has status "Ready":"True"
	I1212 00:23:33.639268 1175628 pod_ready.go:81] duration metric: took 19.013468378s waiting for pod "coredns-66bff467f8-q79bm" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.639280 1175628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.643689 1175628 pod_ready.go:92] pod "etcd-ingress-addon-legacy-491046" in "kube-system" namespace has status "Ready":"True"
	I1212 00:23:33.643713 1175628 pod_ready.go:81] duration metric: took 4.425915ms waiting for pod "etcd-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.643728 1175628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.648092 1175628 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-491046" in "kube-system" namespace has status "Ready":"True"
	I1212 00:23:33.648119 1175628 pod_ready.go:81] duration metric: took 4.383249ms waiting for pod "kube-apiserver-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.648131 1175628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.652782 1175628 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-491046" in "kube-system" namespace has status "Ready":"True"
	I1212 00:23:33.652805 1175628 pod_ready.go:81] duration metric: took 4.665864ms waiting for pod "kube-controller-manager-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.652821 1175628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-th826" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.657936 1175628 pod_ready.go:92] pod "kube-proxy-th826" in "kube-system" namespace has status "Ready":"True"
	I1212 00:23:33.657966 1175628 pod_ready.go:81] duration metric: took 5.134119ms waiting for pod "kube-proxy-th826" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.657995 1175628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:33.834431 1175628 request.go:629] Waited for 176.320597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-491046
	I1212 00:23:34.034611 1175628 request.go:629] Waited for 197.344601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-491046
	I1212 00:23:34.037680 1175628 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-491046" in "kube-system" namespace has status "Ready":"True"
	I1212 00:23:34.037704 1175628 pod_ready.go:81] duration metric: took 379.700636ms waiting for pod "kube-scheduler-ingress-addon-legacy-491046" in "kube-system" namespace to be "Ready" ...
	I1212 00:23:34.037715 1175628 pod_ready.go:38] duration metric: took 19.426969424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:23:34.037730 1175628 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:23:34.037806 1175628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:34.052250 1175628 api_server.go:72] duration metric: took 19.688777831s to wait for apiserver process to appear ...
	I1212 00:23:34.052277 1175628 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:23:34.052299 1175628 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:23:34.061374 1175628 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:23:34.062449 1175628 api_server.go:141] control plane version: v1.18.20
	I1212 00:23:34.062475 1175628 api_server.go:131] duration metric: took 10.190371ms to wait for apiserver health ...
	I1212 00:23:34.062486 1175628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:23:34.234900 1175628 request.go:629] Waited for 172.319549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:23:34.241089 1175628 system_pods.go:59] 8 kube-system pods found
	I1212 00:23:34.241126 1175628 system_pods.go:61] "coredns-66bff467f8-q79bm" [aad18ef3-b73e-4987-9743-8e2f01a10a8c] Running
	I1212 00:23:34.241134 1175628 system_pods.go:61] "etcd-ingress-addon-legacy-491046" [0f25bd26-a959-40d0-87c5-39adf11f9165] Running
	I1212 00:23:34.241139 1175628 system_pods.go:61] "kindnet-66ptk" [d9d20cd2-7d6a-4b1f-bd28-76745f8b89d7] Running
	I1212 00:23:34.241145 1175628 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-491046" [284d2e9c-e75e-4f76-bbf0-408ddb5a3183] Running
	I1212 00:23:34.241155 1175628 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-491046" [913ae909-df7a-45ed-b12b-583435aa0a7c] Running
	I1212 00:23:34.241160 1175628 system_pods.go:61] "kube-proxy-th826" [d3b285b3-db23-47ad-849a-9f6de75693b4] Running
	I1212 00:23:34.241165 1175628 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-491046" [957263bc-3701-4b9e-9ce4-22ec69011e44] Running
	I1212 00:23:34.241169 1175628 system_pods.go:61] "storage-provisioner" [bd7d20a6-cbf9-4138-afd5-dcbd8c6361e5] Running
	I1212 00:23:34.241176 1175628 system_pods.go:74] duration metric: took 178.6836ms to wait for pod list to return data ...
	I1212 00:23:34.241188 1175628 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:23:34.434933 1175628 request.go:629] Waited for 193.672503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:23:34.437311 1175628 default_sa.go:45] found service account: "default"
	I1212 00:23:34.437339 1175628 default_sa.go:55] duration metric: took 196.141139ms for default service account to be created ...
	I1212 00:23:34.437350 1175628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:23:34.634777 1175628 request.go:629] Waited for 197.367116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:23:34.641549 1175628 system_pods.go:86] 8 kube-system pods found
	I1212 00:23:34.641584 1175628 system_pods.go:89] "coredns-66bff467f8-q79bm" [aad18ef3-b73e-4987-9743-8e2f01a10a8c] Running
	I1212 00:23:34.641592 1175628 system_pods.go:89] "etcd-ingress-addon-legacy-491046" [0f25bd26-a959-40d0-87c5-39adf11f9165] Running
	I1212 00:23:34.641598 1175628 system_pods.go:89] "kindnet-66ptk" [d9d20cd2-7d6a-4b1f-bd28-76745f8b89d7] Running
	I1212 00:23:34.641605 1175628 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-491046" [284d2e9c-e75e-4f76-bbf0-408ddb5a3183] Running
	I1212 00:23:34.641618 1175628 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-491046" [913ae909-df7a-45ed-b12b-583435aa0a7c] Running
	I1212 00:23:34.641627 1175628 system_pods.go:89] "kube-proxy-th826" [d3b285b3-db23-47ad-849a-9f6de75693b4] Running
	I1212 00:23:34.641633 1175628 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-491046" [957263bc-3701-4b9e-9ce4-22ec69011e44] Running
	I1212 00:23:34.641642 1175628 system_pods.go:89] "storage-provisioner" [bd7d20a6-cbf9-4138-afd5-dcbd8c6361e5] Running
	I1212 00:23:34.641649 1175628 system_pods.go:126] duration metric: took 204.294134ms to wait for k8s-apps to be running ...
	I1212 00:23:34.641670 1175628 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:23:34.641728 1175628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:23:34.656208 1175628 system_svc.go:56] duration metric: took 14.527228ms WaitForService to wait for kubelet.
	I1212 00:23:34.656237 1175628 kubeadm.go:581] duration metric: took 20.292772161s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:23:34.656257 1175628 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:23:34.834659 1175628 request.go:629] Waited for 178.3344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1212 00:23:34.837721 1175628 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:23:34.837754 1175628 node_conditions.go:123] node cpu capacity is 2
	I1212 00:23:34.837767 1175628 node_conditions.go:105] duration metric: took 181.504997ms to run NodePressure ...
	I1212 00:23:34.837778 1175628 start.go:228] waiting for startup goroutines ...
	I1212 00:23:34.837820 1175628 start.go:233] waiting for cluster config update ...
	I1212 00:23:34.837831 1175628 start.go:242] writing updated cluster config ...
	I1212 00:23:34.838132 1175628 ssh_runner.go:195] Run: rm -f paused
	I1212 00:23:34.897337 1175628 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 00:23:34.899929 1175628 out.go:177] 
	W1212 00:23:34.902315 1175628 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 00:23:34.904797 1175628 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 00:23:34.907038 1175628 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-491046" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	375ea241c03ce       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app           2                   ab2b304aed24f       hello-world-app-5f5d8b66bb-nmz6n
	fb315b2e023a6       f09fc93534f6a       34 seconds ago       Running             nginx                     0                   492c75cbfb57f       nginx
	c07072fe8f7f6       d7f0cba3aa5bf       56 seconds ago       Exited              controller                0                   91f0786fd025b       ingress-nginx-controller-7fcf777cb7-gmqrw
	b895f95ebb80d       a883f7fc35610       About a minute ago   Exited              patch                     0                   c482d0cc72321       ingress-nginx-admission-patch-tfmgd
	f58d2e201ff60       a883f7fc35610       About a minute ago   Exited              create                    0                   519aedfe29351       ingress-nginx-admission-create-gq9jf
	6269909858447       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   5c00f8d50835b       coredns-66bff467f8-q79bm
	07a1f5b4169b0       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   54d294c572c46       storage-provisioner
	5639742c2fe4a       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   66d3ac60239ee       kindnet-66ptk
	6e60ac5af8786       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   e816452bf8a18       kube-proxy-th826
	f74774f87147f       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   ce50bd097ac69       kube-controller-manager-ingress-addon-legacy-491046
	b5c8a627099c6       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   2d08e1b329b99       kube-apiserver-ingress-addon-legacy-491046
	8e6e75a6dff8f       095f37015706d       About a minute ago   Running             kube-scheduler            0                   18a9400d9ebc0       kube-scheduler-ingress-addon-legacy-491046
	dd6f3566f71ee       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   d04fd448f4d58       etcd-ingress-addon-legacy-491046
	
	* 
	* ==> containerd <==
	* Dec 12 00:24:30 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:30.885557827Z" level=info msg="RemoveContainer for \"83a2dca71ee5a9063f8d4d3774f13927ec2af95b009ec31c76d165fa26d1037c\" returns successfully"
	Dec 12 00:24:32 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:32.587760058Z" level=info msg="StopContainer for \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" with timeout 2 (s)"
	Dec 12 00:24:32 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:32.588777331Z" level=info msg="Stop container \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" with signal terminated"
	Dec 12 00:24:32 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:32.617704437Z" level=info msg="StopContainer for \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" with timeout 2 (s)"
	Dec 12 00:24:32 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:32.618568382Z" level=info msg="Skipping the sending of signal terminated to container \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" because a prior stop with timeout>0 request already sent the signal"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.612324684Z" level=info msg="Kill container \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\""
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.618967839Z" level=info msg="Kill container \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\""
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.708444670Z" level=info msg="shim disconnected" id=c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.708516103Z" level=warning msg="cleaning up after shim disconnected" id=c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021 namespace=k8s.io
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.708527878Z" level=info msg="cleaning up dead shim"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.719743293Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:24:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4578 runtime=io.containerd.runc.v2\n"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.723006589Z" level=info msg="StopContainer for \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" returns successfully"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.723135581Z" level=info msg="StopContainer for \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" returns successfully"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.723743110Z" level=info msg="StopPodSandbox for \"91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a\""
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.723809119Z" level=info msg="Container to stop \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.726162063Z" level=info msg="StopPodSandbox for \"91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a\""
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.726296922Z" level=info msg="Container to stop \"c07072fe8f7f6f8be2085fc09f15c5fbfa04bdaea9e89e319d08017ad1982021\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.760455845Z" level=info msg="shim disconnected" id=91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.760528230Z" level=warning msg="cleaning up after shim disconnected" id=91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a namespace=k8s.io
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.760539930Z" level=info msg="cleaning up dead shim"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.771480369Z" level=warning msg="cleanup warnings time=\"2023-12-12T00:24:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4614 runtime=io.containerd.runc.v2\n"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.822690999Z" level=info msg="TearDown network for sandbox \"91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a\" successfully"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.822740607Z" level=info msg="StopPodSandbox for \"91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a\" returns successfully"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.828105961Z" level=info msg="TearDown network for sandbox \"91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a\" successfully"
	Dec 12 00:24:34 ingress-addon-legacy-491046 containerd[826]: time="2023-12-12T00:24:34.828154477Z" level=info msg="StopPodSandbox for \"91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a\" returns successfully"
	
	* 
	* ==> coredns [6269909858447a7177015570281661d4255991f82d1edcc754a5628255e81a80] <==
	* [INFO] 10.244.0.5:43740 - 19310 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052176s
	[INFO] 10.244.0.5:43740 - 65178 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002445546s
	[INFO] 10.244.0.5:43740 - 18389 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00177118s
	[INFO] 10.244.0.5:43740 - 13422 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000225869s
	[INFO] 10.244.0.5:43893 - 48703 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00478857s
	[INFO] 10.244.0.5:43893 - 22802 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000906718s
	[INFO] 10.244.0.5:43893 - 28587 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066182s
	[INFO] 10.244.0.5:39398 - 41755 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000216967s
	[INFO] 10.244.0.5:60609 - 3751 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000556247s
	[INFO] 10.244.0.5:39398 - 63570 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093997s
	[INFO] 10.244.0.5:60609 - 32227 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000279859s
	[INFO] 10.244.0.5:60609 - 20113 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000173881s
	[INFO] 10.244.0.5:39398 - 20338 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057313s
	[INFO] 10.244.0.5:39398 - 2774 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041435s
	[INFO] 10.244.0.5:60609 - 14415 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032894s
	[INFO] 10.244.0.5:60609 - 52964 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037111s
	[INFO] 10.244.0.5:39398 - 30291 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005988s
	[INFO] 10.244.0.5:60609 - 23219 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000112984s
	[INFO] 10.244.0.5:39398 - 56799 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035914s
	[INFO] 10.244.0.5:60609 - 12417 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002853314s
	[INFO] 10.244.0.5:39398 - 45926 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003138563s
	[INFO] 10.244.0.5:39398 - 14143 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001695497s
	[INFO] 10.244.0.5:60609 - 56451 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002723576s
	[INFO] 10.244.0.5:60609 - 31440 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047023s
	[INFO] 10.244.0.5:39398 - 24705 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038974s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-491046
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-491046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=ingress-addon-legacy-491046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_22_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:22:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-491046
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:24:31 +0000   Tue, 12 Dec 2023 00:22:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:24:31 +0000   Tue, 12 Dec 2023 00:22:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:24:31 +0000   Tue, 12 Dec 2023 00:22:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:24:31 +0000   Tue, 12 Dec 2023 00:23:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-491046
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 4442999208894abbb5ebbbd700088e1a
	  System UUID:                77572dac-97bb-4e72-bb6c-cb7dd4c20346
	  Boot ID:                    6562b840-385e-4140-a0d3-196e503f4900
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-nmz6n                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-66bff467f8-q79bm                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-ingress-addon-legacy-491046                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-66ptk                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-491046             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-491046    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-th826                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-491046             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  112s (x4 over 112s)  kubelet     Node ingress-addon-legacy-491046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x4 over 112s)  kubelet     Node ingress-addon-legacy-491046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x3 over 112s)  kubelet     Node ingress-addon-legacy-491046 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-491046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-491046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-491046 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-491046 status is now: NodeReady
	  Normal  Starting                 86s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001091] FS-Cache: O-key=[8] '0c405c0100000000'
	[  +0.000771] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=000000006279b80e
	[  +0.001091] FS-Cache: N-key=[8] '0c405c0100000000'
	[  +0.002699] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000c02127f9
	[  +0.001091] FS-Cache: O-key=[8] '0c405c0100000000'
	[  +0.000771] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=0000000049369323
	[  +0.001091] FS-Cache: N-key=[8] '0c405c0100000000'
	[  +2.038727] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001005] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=00000000ce236adb
	[  +0.001132] FS-Cache: O-key=[8] '0b405c0100000000'
	[  +0.000730] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=000000006279b80e
	[  +0.001086] FS-Cache: N-key=[8] '0b405c0100000000'
	[  +0.318515] FS-Cache: Duplicate cookie detected
	[  +0.000765] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001006] FS-Cache: O-cookie d=000000004e4a3ca3{9p.inode} n=0000000061c57b4c
	[  +0.001217] FS-Cache: O-key=[8] '11405c0100000000'
	[  +0.000731] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=000000004e4a3ca3{9p.inode} n=00000000813b8d63
	[  +0.001098] FS-Cache: N-key=[8] '11405c0100000000'
	
	* 
	* ==> etcd [dd6f3566f71ee2c1fce2403e2c47d9a66a1e2281853179ac3ab642dcf7df6f52] <==
	* raft2023/12/12 00:22:49 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-12 00:22:49.440533 W | auth: simple token is not cryptographically signed
	2023-12-12 00:22:49.443571 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-12 00:22:49.447846 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 00:22:49.447987 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 00:22:49.448114 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-12 00:22:49.448185 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-12 00:22:49.448665 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/12 00:22:49 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/12 00:22:49 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-12 00:22:49.983499 I | etcdserver: published {Name:ingress-addon-legacy-491046 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-12 00:22:50.035477 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 00:22:50.087389 I | embed: ready to serve client requests
	2023-12-12 00:22:50.108889 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-12 00:22:50.187422 I | embed: ready to serve client requests
	2023-12-12 00:22:50.245521 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 00:22:50.259401 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 00:22:50.339894 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 00:22:52.192816 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" limit:10000 " with result "range_response_count:0 size:4" took too long (105.163189ms) to execute
	
	* 
	* ==> kernel <==
	*  00:24:40 up  7:07,  0 users,  load average: 0.77, 1.26, 0.90
	Linux ingress-addon-legacy-491046 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5639742c2fe4a856656d077f0e1ff89e4029fba46f09c578679f4d83aa0fa26d] <==
	* I1212 00:23:16.121773       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:23:16.121838       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:23:16.121960       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:23:16.121970       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:23:16.121981       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:23:16.518450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:16.518653       1 main.go:227] handling current node
	I1212 00:23:26.630655       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:26.630687       1 main.go:227] handling current node
	I1212 00:23:36.644208       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:36.644396       1 main.go:227] handling current node
	I1212 00:23:46.651926       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:46.652151       1 main.go:227] handling current node
	I1212 00:23:56.655720       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:56.655750       1 main.go:227] handling current node
	I1212 00:24:06.659534       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:24:06.659563       1 main.go:227] handling current node
	I1212 00:24:16.663123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:24:16.663153       1 main.go:227] handling current node
	I1212 00:24:26.666294       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:24:26.666325       1 main.go:227] handling current node
	I1212 00:24:36.675631       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:24:36.675663       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b5c8a627099c6141787d880150baaabc6b7b3f076dc3a38b16e635af13c18f2b] <==
	* E1212 00:22:55.237367       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1212 00:22:55.257405       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 00:22:55.279161       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 00:22:55.280459       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:22:55.282395       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:22:55.282855       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:22:56.075433       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 00:22:56.075462       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 00:22:56.082687       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 00:22:56.087025       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:22:56.087248       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 00:22:56.519334       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:22:56.565920       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 00:22:56.650689       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1212 00:22:56.652096       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 00:22:56.656021       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:22:57.502210       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1212 00:22:58.100305       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 00:22:58.211760       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 00:23:01.497553       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:23:13.134805       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 00:23:13.432623       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:23:35.816510       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1212 00:24:04.103989       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1212 00:24:31.622354       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x400f5f1908), encoder:(*versioning.codec)(0x4011513e00), buf:(*bytes.Buffer)(0x4011202f00)})
	
	* 
	* ==> kube-controller-manager [f74774f87147fcb3ccabb5d4e4ca63e88ebeafba0bbec39f29899b5995bae5bd] <==
	* I1212 00:23:13.477176       1 shared_informer.go:230] Caches are synced for TTL 
	I1212 00:23:13.480975       1 shared_informer.go:230] Caches are synced for persistent volume 
	E1212 00:23:13.517132       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"14b9c3bc-2dc6-4f7e-96d8-2444767cdd8a", ResourceVersion:"198", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63837937378, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001569100), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4001569120)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001569140), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x400029f3c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4001569160), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001569180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40015691c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001241d10), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000723a78), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000350380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f6f8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000723af8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1212 00:23:13.527449       1 shared_informer.go:230] Caches are synced for attach detach 
	I1212 00:23:13.626795       1 shared_informer.go:230] Caches are synced for job 
	I1212 00:23:13.684655       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 00:23:13.701900       1 shared_informer.go:230] Caches are synced for stateful set 
	I1212 00:23:13.724139       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 00:23:13.724169       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 00:23:13.726360       1 shared_informer.go:230] Caches are synced for disruption 
	I1212 00:23:13.726379       1 disruption.go:339] Sending events to api server.
	I1212 00:23:13.731253       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 00:23:13.774986       1 shared_informer.go:230] Caches are synced for namespace 
	I1212 00:23:13.776858       1 shared_informer.go:230] Caches are synced for service account 
	I1212 00:23:13.785676       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 00:23:14.360886       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1932cfc9-b93c-4e41-8dea-6b6910376aa9", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1212 00:23:14.451012       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"cec82e4d-59ba-4348-bfe9-d70baa957d7e", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-67csp
	I1212 00:23:35.805585       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0623eddc-18bb-40d8-a356-f7961d6d1b92", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 00:23:35.824038       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"657807c0-dc31-4c83-aeed-0a67059c9900", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-gmqrw
	I1212 00:23:35.840651       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9dd4c74d-ea09-4f21-8dbe-ebdf19f1aab7", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-gq9jf
	I1212 00:23:35.918506       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"154ad324-0d69-4f70-923e-0337bc798227", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tfmgd
	I1212 00:23:38.746152       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9dd4c74d-ea09-4f21-8dbe-ebdf19f1aab7", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 00:23:38.768171       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"154ad324-0d69-4f70-923e-0337bc798227", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 00:24:12.889457       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"2fc65c64-22c4-4ba4-9d74-528fb17e4da8", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1212 00:24:12.893779       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"9287a202-6650-4be2-ace7-764ccf13365c", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-nmz6n
	
	* 
	* ==> kube-proxy [6e60ac5af8786591c244b6afb5bd562c9d48c2acd6f0b424b81ee1f97d9764cc] <==
	* W1212 00:23:14.140825       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 00:23:14.152232       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1212 00:23:14.152280       1 server_others.go:186] Using iptables Proxier.
	I1212 00:23:14.152702       1 server.go:583] Version: v1.18.20
	I1212 00:23:14.154225       1 config.go:133] Starting endpoints config controller
	I1212 00:23:14.154384       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 00:23:14.154533       1 config.go:315] Starting service config controller
	I1212 00:23:14.154595       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 00:23:14.254614       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1212 00:23:14.254783       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8e6e75a6dff8f0076c424f961032d8a46e4fd17a955850d12b8b0d0b93804b1b] <==
	* W1212 00:22:55.224737       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:22:55.255162       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 00:22:55.255582       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 00:22:55.258971       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 00:22:55.260239       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:22:55.260357       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:22:55.260488       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 00:22:55.271672       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:22:55.272223       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:22:55.272462       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:22:55.272652       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:22:55.272852       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:22:55.273063       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:22:55.273291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:22:55.273480       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:22:55.273674       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:22:55.273865       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:22:55.274056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:22:55.274248       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:22:56.170386       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:22:56.221248       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:22:56.230993       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:22:56.249017       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:22:56.312745       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1212 00:22:59.260596       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 12 00:24:16 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:16.838478    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 46e6da132be53358d083fadf393aca5d4906bcc5481c19feade526d17fd3a8c3
	Dec 12 00:24:16 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:16.839853    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 83a2dca71ee5a9063f8d4d3774f13927ec2af95b009ec31c76d165fa26d1037c
	Dec 12 00:24:16 ingress-addon-legacy-491046 kubelet[1613]: E1212 00:24:16.840247    1613 pod_workers.go:191] Error syncing pod e4c9294f-1d7e-4a68-ab78-93f9300d038c ("hello-world-app-5f5d8b66bb-nmz6n_default(e4c9294f-1d7e-4a68-ab78-93f9300d038c)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nmz6n_default(e4c9294f-1d7e-4a68-ab78-93f9300d038c)"
	Dec 12 00:24:17 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:17.842418    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 83a2dca71ee5a9063f8d4d3774f13927ec2af95b009ec31c76d165fa26d1037c
	Dec 12 00:24:17 ingress-addon-legacy-491046 kubelet[1613]: E1212 00:24:17.842663    1613 pod_workers.go:191] Error syncing pod e4c9294f-1d7e-4a68-ab78-93f9300d038c ("hello-world-app-5f5d8b66bb-nmz6n_default(e4c9294f-1d7e-4a68-ab78-93f9300d038c)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nmz6n_default(e4c9294f-1d7e-4a68-ab78-93f9300d038c)"
	Dec 12 00:24:24 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:24.602658    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e6dca27a8f489b6379082bf52b1d597f9f9f2f8802652e6ee647b45a9c2a871b
	Dec 12 00:24:24 ingress-addon-legacy-491046 kubelet[1613]: E1212 00:24:24.603553    1613 pod_workers.go:191] Error syncing pod 6eba7850-846c-4438-9806-7149d0b72a73 ("kube-ingress-dns-minikube_kube-system(6eba7850-846c-4438-9806-7149d0b72a73)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(6eba7850-846c-4438-9806-7149d0b72a73)"
	Dec 12 00:24:28 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:28.872521    1613 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-8hsrd" (UniqueName: "kubernetes.io/secret/6eba7850-846c-4438-9806-7149d0b72a73-minikube-ingress-dns-token-8hsrd") pod "6eba7850-846c-4438-9806-7149d0b72a73" (UID: "6eba7850-846c-4438-9806-7149d0b72a73")
	Dec 12 00:24:28 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:28.876886    1613 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba7850-846c-4438-9806-7149d0b72a73-minikube-ingress-dns-token-8hsrd" (OuterVolumeSpecName: "minikube-ingress-dns-token-8hsrd") pod "6eba7850-846c-4438-9806-7149d0b72a73" (UID: "6eba7850-846c-4438-9806-7149d0b72a73"). InnerVolumeSpecName "minikube-ingress-dns-token-8hsrd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 00:24:28 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:28.972939    1613 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-8hsrd" (UniqueName: "kubernetes.io/secret/6eba7850-846c-4438-9806-7149d0b72a73-minikube-ingress-dns-token-8hsrd") on node "ingress-addon-legacy-491046" DevicePath ""
	Dec 12 00:24:29 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:29.865900    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e6dca27a8f489b6379082bf52b1d597f9f9f2f8802652e6ee647b45a9c2a871b
	Dec 12 00:24:30 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:30.602650    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 83a2dca71ee5a9063f8d4d3774f13927ec2af95b009ec31c76d165fa26d1037c
	Dec 12 00:24:30 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:30.870055    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 83a2dca71ee5a9063f8d4d3774f13927ec2af95b009ec31c76d165fa26d1037c
	Dec 12 00:24:30 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:30.870374    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 375ea241c03ce69f57c7c534b9850194b62644290906d6c9fdf8efe5e1437f89
	Dec 12 00:24:30 ingress-addon-legacy-491046 kubelet[1613]: E1212 00:24:30.870627    1613 pod_workers.go:191] Error syncing pod e4c9294f-1d7e-4a68-ab78-93f9300d038c ("hello-world-app-5f5d8b66bb-nmz6n_default(e4c9294f-1d7e-4a68-ab78-93f9300d038c)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nmz6n_default(e4c9294f-1d7e-4a68-ab78-93f9300d038c)"
	Dec 12 00:24:32 ingress-addon-legacy-491046 kubelet[1613]: E1212 00:24:32.588504    1613 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gmqrw.179fedd284397a53", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gmqrw", UID:"6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-491046"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15607f022f6fa53, ext:94549090724, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15607f022f6fa53, ext:94549090724, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gmqrw.179fedd284397a53" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 00:24:32 ingress-addon-legacy-491046 kubelet[1613]: E1212 00:24:32.626518    1613 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gmqrw.179fedd284397a53", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gmqrw", UID:"6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-491046"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15607f022f6fa53, ext:94549090724, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15607f024b50f9f, ext:94578325233, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gmqrw.179fedd284397a53" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 00:24:34 ingress-addon-legacy-491046 kubelet[1613]: W1212 00:24:34.894645    1613 pod_container_deletor.go:77] Container "91f0786fd025b1bfb4de9e7059b4782e44b298dbe65001eaf139701a7feaad1a" not found in pod's containers
	Dec 12 00:24:36 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:36.800544    1613 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-trjjk" (UniqueName: "kubernetes.io/secret/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c-ingress-nginx-token-trjjk") pod "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c" (UID: "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c")
	Dec 12 00:24:36 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:36.800602    1613 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c-webhook-cert") pod "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c" (UID: "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c")
	Dec 12 00:24:36 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:36.807097    1613 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c" (UID: "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 00:24:36 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:36.807860    1613 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c-ingress-nginx-token-trjjk" (OuterVolumeSpecName: "ingress-nginx-token-trjjk") pod "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c" (UID: "6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c"). InnerVolumeSpecName "ingress-nginx-token-trjjk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 00:24:36 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:36.900910    1613 reconciler.go:319] Volume detached for volume "ingress-nginx-token-trjjk" (UniqueName: "kubernetes.io/secret/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c-ingress-nginx-token-trjjk") on node "ingress-addon-legacy-491046" DevicePath ""
	Dec 12 00:24:36 ingress-addon-legacy-491046 kubelet[1613]: I1212 00:24:36.900945    1613 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c-webhook-cert") on node "ingress-addon-legacy-491046" DevicePath ""
	Dec 12 00:24:37 ingress-addon-legacy-491046 kubelet[1613]: W1212 00:24:37.609483    1613 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/6e0be507-1b18-4ce1-b1f6-3f1785f6bb0c/volumes" does not exist
	
	* 
	* ==> storage-provisioner [07a1f5b4169b0668df4f545c0aa49694ba5bafb9709af72e7f38fc951e41287c] <==
	* I1212 00:23:17.373854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:23:17.390353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:23:17.390581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:23:17.398886       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:23:17.399274       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-491046_d4299456-9fd0-46ac-ae75-e97ef98eefce!
	I1212 00:23:17.400933       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f35a668-f5a7-4b42-af00-c5687d3f42bd", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-491046_d4299456-9fd0-46ac-ae75-e97ef98eefce became leader
	I1212 00:23:17.500279       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-491046_d4299456-9fd0-46ac-ae75-e97ef98eefce!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-491046 -n ingress-addon-legacy-491046
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-491046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.93s)
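
The post-mortem above points at two concrete things to chase by hand: the hello-world-app pod cycling through CrashLoopBackOff in the kubelet log, and the ingress-nginx namespace being torn down while the test was still running. As a rough follow-up sketch (assuming the context name ingress-addon-legacy-491046 and the pod name hello-world-app-5f5d8b66bb-nmz6n taken from the log above; these commands are illustrative and not part of the test harness), the usual next step is to pull the pod's previous-container logs and the namespace events:

  # Why is hello-world-app restarting? Inspect its status and last crash output.
  kubectl --context ingress-addon-legacy-491046 describe pod hello-world-app-5f5d8b66bb-nmz6n -n default
  kubectl --context ingress-addon-legacy-491046 logs hello-world-app-5f5d8b66bb-nmz6n -n default --previous
  # What happened to the ingress controller? List recent events in its namespace.
  kubectl --context ingress-addon-legacy-491046 get events -n ingress-nginx --sort-by=.lastTimestamp

If the addon-disable step has already deleted the ingress-nginx namespace, the events query will simply come back empty; the CrashLoopBackOff logs are the more reliable signal in that case.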

                                                
                                    

Test pass (266/315)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.46
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 12.64
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 22.57
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.1
23 TestDownloadOnly/DeleteAll 0.26
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
26 TestBinaryMirror 0.62
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
32 TestAddons/Setup 142.35
34 TestAddons/parallel/Registry 14.44
36 TestAddons/parallel/InspektorGadget 10.86
37 TestAddons/parallel/MetricsServer 5.83
40 TestAddons/parallel/CSI 64.68
41 TestAddons/parallel/Headlamp 11.56
42 TestAddons/parallel/CloudSpanner 5.66
43 TestAddons/parallel/LocalPath 54.17
44 TestAddons/parallel/NvidiaDevicePlugin 5.82
47 TestAddons/serial/GCPAuth/Namespaces 0.17
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 35.99
50 TestCertExpiration 233.37
52 TestForceSystemdFlag 52.2
53 TestForceSystemdEnv 46.88
54 TestDockerEnvContainerd 45.64
59 TestErrorSpam/setup 33.25
60 TestErrorSpam/start 0.9
61 TestErrorSpam/status 1.13
62 TestErrorSpam/pause 1.94
63 TestErrorSpam/unpause 2.07
64 TestErrorSpam/stop 1.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 62.53
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.63
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.29
76 TestFunctional/serial/CacheCmd/cache/add_local 1.5
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.29
81 TestFunctional/serial/CacheCmd/cache/delete 0.15
82 TestFunctional/serial/MinikubeKubectlCmd 0.16
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
86 TestFunctional/serial/LogsCmd 1.58
90 TestFunctional/parallel/ConfigCmd 0.62
91 TestFunctional/parallel/DashboardCmd 9.66
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.23
98 TestFunctional/parallel/ServiceCmdConnect 9.65
99 TestFunctional/parallel/AddonsCmd 0.23
100 TestFunctional/parallel/PersistentVolumeClaim 99.93
102 TestFunctional/parallel/SSHCmd 0.76
103 TestFunctional/parallel/CpCmd 1.64
105 TestFunctional/parallel/FileSync 0.39
106 TestFunctional/parallel/CertSync 2.39
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 1.42
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.85
122 TestFunctional/parallel/ImageCommands/Setup 2.66
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.89
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 49.29
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
151 TestFunctional/parallel/ProfileCmd/profile_list 0.45
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
153 TestFunctional/parallel/MountCmd/any-port 7.2
154 TestFunctional/parallel/MountCmd/specific-port 2.38
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
156 TestFunctional/delete_addon-resizer_images 0.09
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 89.21
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.03
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.78
169 TestJSONOutput/start/Command 60.73
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.84
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.77
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.84
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.28
194 TestKicCustomNetwork/create_custom_network 43.03
195 TestKicCustomNetwork/use_default_bridge_network 38.37
196 TestKicExistingNetwork 34.88
197 TestKicCustomSubnet 36.55
198 TestKicStaticIP 38.49
199 TestMainNoArgs 0.07
200 TestMinikubeProfile 69.14
203 TestMountStart/serial/StartWithMountFirst 7.3
204 TestMountStart/serial/VerifyMountFirst 0.31
205 TestMountStart/serial/StartWithMountSecond 9
206 TestMountStart/serial/VerifyMountSecond 0.31
207 TestMountStart/serial/DeleteFirst 1.69
208 TestMountStart/serial/VerifyMountPostDelete 0.29
209 TestMountStart/serial/Stop 1.23
210 TestMountStart/serial/RestartStopped 7.72
211 TestMountStart/serial/VerifyMountPostStop 0.3
214 TestMultiNode/serial/FreshStart2Nodes 79.11
215 TestMultiNode/serial/DeployApp2Nodes 10.37
216 TestMultiNode/serial/PingHostFrom2Pods 1.14
217 TestMultiNode/serial/AddNode 18.16
218 TestMultiNode/serial/MultiNodeLabels 0.09
219 TestMultiNode/serial/ProfileList 0.36
220 TestMultiNode/serial/CopyFile 11.82
221 TestMultiNode/serial/StopNode 2.44
222 TestMultiNode/serial/StartAfterStop 12.37
223 TestMultiNode/serial/RestartKeepsNodes 123.53
224 TestMultiNode/serial/DeleteNode 5.15
225 TestMultiNode/serial/StopMultiNode 24.34
226 TestMultiNode/serial/RestartMultiNode 79.68
227 TestMultiNode/serial/ValidateNameConflict 35.04
232 TestPreload 150.92
234 TestScheduledStopUnix 107.91
237 TestInsufficientStorage 13.65
238 TestRunningBinaryUpgrade 96.7
240 TestKubernetesUpgrade 137.41
241 TestMissingContainerUpgrade 173.83
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
245 TestPause/serial/Start 69.63
246 TestNoKubernetes/serial/StartWithK8s 44.47
247 TestNoKubernetes/serial/StartWithStopK8s 16.81
248 TestNoKubernetes/serial/Start 8.95
249 TestPause/serial/SecondStartNoReconfiguration 7.73
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
251 TestNoKubernetes/serial/ProfileList 1.19
252 TestNoKubernetes/serial/Stop 1.32
253 TestNoKubernetes/serial/StartNoArgs 7.43
254 TestPause/serial/Pause 1.02
255 TestPause/serial/VerifyStatus 0.44
256 TestPause/serial/Unpause 0.97
257 TestPause/serial/PauseAgain 1.23
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
259 TestPause/serial/DeletePaused 3.03
260 TestPause/serial/VerifyDeletedResources 0.18
261 TestStoppedBinaryUpgrade/Setup 1.42
262 TestStoppedBinaryUpgrade/Upgrade 92.85
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
278 TestNetworkPlugins/group/false 6.55
283 TestStartStop/group/old-k8s-version/serial/FirstStart 119.15
284 TestStartStop/group/old-k8s-version/serial/DeployApp 10.59
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
286 TestStartStop/group/old-k8s-version/serial/Stop 12.28
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
288 TestStartStop/group/old-k8s-version/serial/SecondStart 660.65
290 TestStartStop/group/no-preload/serial/FirstStart 88.91
291 TestStartStop/group/no-preload/serial/DeployApp 10.12
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
293 TestStartStop/group/no-preload/serial/Stop 12.2
294 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
295 TestStartStop/group/no-preload/serial/SecondStart 342.97
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
299 TestStartStop/group/no-preload/serial/Pause 3.51
301 TestStartStop/group/embed-certs/serial/FirstStart 62.73
302 TestStartStop/group/embed-certs/serial/DeployApp 8.52
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
304 TestStartStop/group/embed-certs/serial/Stop 12.2
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
306 TestStartStop/group/embed-certs/serial/SecondStart 343.78
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
310 TestStartStop/group/old-k8s-version/serial/Pause 3.58
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.96
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.33
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.31
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 343.12
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.04
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
321 TestStartStop/group/embed-certs/serial/Pause 3.62
323 TestStartStop/group/newest-cni/serial/FirstStart 48.15
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
326 TestStartStop/group/newest-cni/serial/Stop 1.28
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
328 TestStartStop/group/newest-cni/serial/SecondStart 32.41
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
332 TestStartStop/group/newest-cni/serial/Pause 3.47
333 TestNetworkPlugins/group/auto/Start 87.8
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.03
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.38
338 TestNetworkPlugins/group/auto/KubeletFlags 0.59
339 TestNetworkPlugins/group/auto/NetCatPod 10.63
340 TestNetworkPlugins/group/kindnet/Start 84.94
341 TestNetworkPlugins/group/auto/DNS 0.37
342 TestNetworkPlugins/group/auto/Localhost 0.4
343 TestNetworkPlugins/group/auto/HairPin 0.41
344 TestNetworkPlugins/group/calico/Start 75.4
345 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.52
347 TestNetworkPlugins/group/kindnet/NetCatPod 9.55
348 TestNetworkPlugins/group/kindnet/DNS 0.25
349 TestNetworkPlugins/group/kindnet/Localhost 0.19
350 TestNetworkPlugins/group/kindnet/HairPin 0.2
351 TestNetworkPlugins/group/calico/ControllerPod 5.06
352 TestNetworkPlugins/group/calico/KubeletFlags 0.46
353 TestNetworkPlugins/group/calico/NetCatPod 11.53
354 TestNetworkPlugins/group/custom-flannel/Start 69.58
355 TestNetworkPlugins/group/calico/DNS 0.42
356 TestNetworkPlugins/group/calico/Localhost 0.24
357 TestNetworkPlugins/group/calico/HairPin 0.24
358 TestNetworkPlugins/group/enable-default-cni/Start 76.31
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.38
361 TestNetworkPlugins/group/custom-flannel/DNS 0.19
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
364 TestNetworkPlugins/group/flannel/Start 64.35
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.48
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.5
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
370 TestNetworkPlugins/group/bridge/Start 92.38
371 TestNetworkPlugins/group/flannel/ControllerPod 5.03
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
373 TestNetworkPlugins/group/flannel/NetCatPod 10.51
374 TestNetworkPlugins/group/flannel/DNS 0.24
375 TestNetworkPlugins/group/flannel/Localhost 0.25
376 TestNetworkPlugins/group/flannel/HairPin 0.19
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
378 TestNetworkPlugins/group/bridge/NetCatPod 9.33
379 TestNetworkPlugins/group/bridge/DNS 0.19
380 TestNetworkPlugins/group/bridge/Localhost 0.17
381 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (14.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-570176 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-570176 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.459667626s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.46s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-570176
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-570176: exit status 85 (95.249002ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-570176 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-570176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:10:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:10:58.100148 1141286 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:10:58.100306 1141286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:58.100317 1141286 out.go:309] Setting ErrFile to fd 2...
	I1212 00:10:58.100323 1141286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:58.100610 1141286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	W1212 00:10:58.100755 1141286 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-1135857/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-1135857/.minikube/config/config.json: no such file or directory
	I1212 00:10:58.101196 1141286 out.go:303] Setting JSON to true
	I1212 00:10:58.102035 1141286 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24805,"bootTime":1702315053,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:10:58.102105 1141286 start.go:138] virtualization:  
	I1212 00:10:58.105918 1141286 out.go:97] [download-only-570176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:10:58.108375 1141286 out.go:169] MINIKUBE_LOCATION=17764
	W1212 00:10:58.106078 1141286 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 00:10:58.106128 1141286 notify.go:220] Checking for updates...
	I1212 00:10:58.113224 1141286 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:10:58.115191 1141286 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:10:58.117375 1141286 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:10:58.119462 1141286 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:10:58.123959 1141286 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:10:58.124294 1141286 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:10:58.156199 1141286 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:10:58.156303 1141286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:58.243653 1141286 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-12 00:10:58.232661273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:10:58.243761 1141286 docker.go:295] overlay module found
	I1212 00:10:58.246322 1141286 out.go:97] Using the docker driver based on user configuration
	I1212 00:10:58.246352 1141286 start.go:298] selected driver: docker
	I1212 00:10:58.246359 1141286 start.go:902] validating driver "docker" against <nil>
	I1212 00:10:58.246457 1141286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:58.313279 1141286 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-12 00:10:58.303109709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:10:58.313443 1141286 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:10:58.313719 1141286 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1212 00:10:58.313869 1141286 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:10:58.316758 1141286 out.go:169] Using Docker driver with root privileges
	I1212 00:10:58.318797 1141286 cni.go:84] Creating CNI manager for ""
	I1212 00:10:58.318816 1141286 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:10:58.318838 1141286 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:10:58.318854 1141286 start_flags.go:323] config:
	{Name:download-only-570176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-570176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:10:58.321117 1141286 out.go:97] Starting control plane node download-only-570176 in cluster download-only-570176
	I1212 00:10:58.321142 1141286 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:10:58.323336 1141286 out.go:97] Pulling base image ...
	I1212 00:10:58.323360 1141286 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1212 00:10:58.323532 1141286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:10:58.341297 1141286 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:10:58.342025 1141286 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:10:58.342125 1141286 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:10:58.398271 1141286 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:10:58.398296 1141286 cache.go:56] Caching tarball of preloaded images
	I1212 00:10:58.398453 1141286 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1212 00:10:58.401609 1141286 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 00:10:58.401630 1141286 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:10:58.519821 1141286 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:11:04.088131 1141286 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-570176"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-570176 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-570176 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.638039365s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.64s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-570176
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-570176: exit status 85 (90.816295ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-570176 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-570176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-570176 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-570176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:12.659217 1141363 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:12.659518 1141363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:12.659546 1141363 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:12.659566 1141363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:12.659918 1141363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	W1212 00:11:12.660072 1141363 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-1135857/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-1135857/.minikube/config/config.json: no such file or directory
	I1212 00:11:12.660404 1141363 out.go:303] Setting JSON to true
	I1212 00:11:12.661306 1141363 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24820,"bootTime":1702315053,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:11:12.661405 1141363 start.go:138] virtualization:  
	I1212 00:11:12.664076 1141363 out.go:97] [download-only-570176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:12.666754 1141363 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:11:12.664355 1141363 notify.go:220] Checking for updates...
	I1212 00:11:12.668738 1141363 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:12.671079 1141363 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:11:12.673709 1141363 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:11:12.675515 1141363 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:11:12.679884 1141363 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:11:12.680425 1141363 config.go:182] Loaded profile config "download-only-570176": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1212 00:11:12.680479 1141363 start.go:810] api.Load failed for download-only-570176: filestore "download-only-570176": Docker machine "download-only-570176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:12.680583 1141363 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 00:11:12.680611 1141363 start.go:810] api.Load failed for download-only-570176: filestore "download-only-570176": Docker machine "download-only-570176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:12.707532 1141363 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:12.707628 1141363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:12.786786 1141363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-12 00:11:12.776797538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:12.786903 1141363 docker.go:295] overlay module found
	I1212 00:11:12.789050 1141363 out.go:97] Using the docker driver based on existing profile
	I1212 00:11:12.789093 1141363 start.go:298] selected driver: docker
	I1212 00:11:12.789100 1141363 start.go:902] validating driver "docker" against &{Name:download-only-570176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-570176 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:12.789281 1141363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:12.854727 1141363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-12 00:11:12.845318398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:12.855203 1141363 cni.go:84] Creating CNI manager for ""
	I1212 00:11:12.855223 1141363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:11:12.855235 1141363 start_flags.go:323] config:
	{Name:download-only-570176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-570176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInter
val:1m0s GPUs:}
	I1212 00:11:12.857600 1141363 out.go:97] Starting control plane node download-only-570176 in cluster download-only-570176
	I1212 00:11:12.857625 1141363 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:11:12.859513 1141363 out.go:97] Pulling base image ...
	I1212 00:11:12.859543 1141363 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:11:12.859717 1141363 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:12.876729 1141363 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:12.876850 1141363 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:12.876884 1141363 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:12.876894 1141363 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:12.876902 1141363 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:12.933971 1141363 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1212 00:11:12.934015 1141363 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:12.934179 1141363 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1212 00:11:12.936996 1141363 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 00:11:12.937025 1141363 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:11:13.048576 1141363 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-570176"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (22.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-570176 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-570176 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (22.569292686s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (22.57s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-570176
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-570176: exit status 85 (95.437875ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-570176 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-570176           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-570176 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-570176           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-570176 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-570176           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:25.388496 1141443 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:25.388647 1141443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:25.388655 1141443 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:25.388661 1141443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:25.388938 1141443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	W1212 00:11:25.389081 1141443 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-1135857/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-1135857/.minikube/config/config.json: no such file or directory
	I1212 00:11:25.389340 1141443 out.go:303] Setting JSON to true
	I1212 00:11:25.390151 1141443 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24833,"bootTime":1702315053,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:11:25.390273 1141443 start.go:138] virtualization:  
	I1212 00:11:25.393215 1141443 out.go:97] [download-only-570176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:25.395360 1141443 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:11:25.393461 1141443 notify.go:220] Checking for updates...
	I1212 00:11:25.397944 1141443 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:25.400331 1141443 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:11:25.402380 1141443 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:11:25.405445 1141443 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:11:25.409527 1141443 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:11:25.410063 1141443 config.go:182] Loaded profile config "download-only-570176": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	W1212 00:11:25.410118 1141443 start.go:810] api.Load failed for download-only-570176: filestore "download-only-570176": Docker machine "download-only-570176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:25.410267 1141443 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 00:11:25.410299 1141443 start.go:810] api.Load failed for download-only-570176: filestore "download-only-570176": Docker machine "download-only-570176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:25.434031 1141443 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:25.434121 1141443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:25.519213 1141443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-12 00:11:25.509629901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:25.519348 1141443 docker.go:295] overlay module found
	I1212 00:11:25.522009 1141443 out.go:97] Using the docker driver based on existing profile
	I1212 00:11:25.522040 1141443 start.go:298] selected driver: docker
	I1212 00:11:25.522053 1141443 start.go:902] validating driver "docker" against &{Name:download-only-570176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-570176 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:25.522218 1141443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:25.589117 1141443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-12 00:11:25.579155481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:25.589566 1141443 cni.go:84] Creating CNI manager for ""
	I1212 00:11:25.589584 1141443 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:11:25.589598 1141443 start_flags.go:323] config:
	{Name:download-only-570176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-570176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPause
Interval:1m0s GPUs:}
	I1212 00:11:25.592128 1141443 out.go:97] Starting control plane node download-only-570176 in cluster download-only-570176
	I1212 00:11:25.592156 1141443 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1212 00:11:25.594414 1141443 out.go:97] Pulling base image ...
	I1212 00:11:25.594442 1141443 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1212 00:11:25.594625 1141443 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:25.611523 1141443 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:25.611652 1141443 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:25.611688 1141443 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:25.611698 1141443 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:25.611706 1141443 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:25.672093 1141443 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I1212 00:11:25.672119 1141443 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:25.672270 1141443 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1212 00:11:25.674773 1141443 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 00:11:25.674808 1141443 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:11:25.785253 1141443 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:92a38fc4a732b87afca6a73d8e45a50f -> /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I1212 00:11:38.191050 1141443 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:11:38.191152 1141443 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I1212 00:11:39.065528 1141443 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I1212 00:11:39.065664 1141443 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/download-only-570176/config.json ...
	I1212 00:11:39.065896 1141443 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1212 00:11:39.066112 1141443 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17764-1135857/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-570176"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-570176
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-472349 --alsologtostderr --binary-mirror http://127.0.0.1:36139 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-472349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-472349
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-004867
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-004867: exit status 85 (109.35229ms)

                                                
                                                
-- stdout --
	* Profile "addons-004867" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-004867"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-004867
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-004867: exit status 85 (111.246512ms)

                                                
                                                
-- stdout --
	* Profile "addons-004867" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-004867"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
TestAddons/Setup (142.35s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-004867 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-004867 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.350990571s)
--- PASS: TestAddons/Setup (142.35s)

                                                
                                    
TestAddons/parallel/Registry (14.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 32.950345ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-mb5bh" [e49fc75b-bdf3-4bc7-974c-ad2b60ad2aa7] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020777682s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v6fdt" [9b09d19d-de70-4521-800a-de11772f56c7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015774664s
addons_test.go:339: (dbg) Run:  kubectl --context addons-004867 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-004867 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-004867 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.235411749s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 ip
2023/12/12 00:14:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.44s)
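The registry check above can be repeated by hand against any profile with the registry addon enabled. A minimal sketch (the profile name is a placeholder; the busybox probe is the same one the test runs, and the final request mirrors the debug GET against the node IP on port 5000 logged above):

    # enable the addon on an existing profile (name is illustrative)
    minikube -p demo addons enable registry

    # probe the in-cluster registry Service from a throwaway busybox pod
    kubectl run --rm -it registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # the registry is also reachable on the node IP at port 5000
    curl -sI "http://$(minikube -p demo ip):5000/"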

                                                
                                    
TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7lpcm" [b3e8aee8-040d-419f-90ec-02375dca0ab4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011883042s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-004867
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-004867: (5.849892575s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.015563ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-q52pq" [11e8547a-29fb-4d97-8ce9-2f39d348f2b0] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012117093s
addons_test.go:414: (dbg) Run:  kubectl --context addons-004867 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

                                                
                                    
TestAddons/parallel/CSI (64.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 34.065931ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-004867 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-004867 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [37408eec-3743-43ee-bcc8-53130f682407] Pending
helpers_test.go:344: "task-pv-pod" [37408eec-3743-43ee-bcc8-53130f682407] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [37408eec-3743-43ee-bcc8-53130f682407] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.010539911s
addons_test.go:583: (dbg) Run:  kubectl --context addons-004867 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-004867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-004867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-004867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-004867 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-004867 delete pod task-pv-pod: (1.410346538s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-004867 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-004867 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-004867 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6b2d9bf9-2b60-41bb-b74c-2c6038c3fb42] Pending
helpers_test.go:344: "task-pv-pod-restore" [6b2d9bf9-2b60-41bb-b74c-2c6038c3fb42] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6b2d9bf9-2b60-41bb-b74c-2c6038c3fb42] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.035117173s
addons_test.go:625: (dbg) Run:  kubectl --context addons-004867 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-004867 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-004867 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-004867 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.844767213s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.68s)
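The CSI run above is a create → snapshot → restore round trip. A rough manual outline of the same flow, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and the same testdata manifests are available; kubectl wait is used here for brevity where the test polls with get -o jsonpath:

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml        # PVC "hpvc"
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod "task-pv-pod" mounts it
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml   # VolumeSnapshot "new-snapshot-demo"
    kubectl wait volumesnapshot/new-snapshot-demo \
      --for=jsonpath='{.status.readyToUse}'=true --timeout=6m
    kubectl delete pod task-pv-pod
    kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC restored from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"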

                                                
                                    
TestAddons/parallel/Headlamp (11.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-004867 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-004867 --alsologtostderr -v=1: (1.536456076s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-jg2nq" [318a3d06-b421-4453-89c8-e6a13cd8fa34] Pending
helpers_test.go:344: "headlamp-777fd4b855-jg2nq" [318a3d06-b421-4453-89c8-e6a13cd8fa34] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-jg2nq" [318a3d06-b421-4453-89c8-e6a13cd8fa34] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.026231193s
--- PASS: TestAddons/parallel/Headlamp (11.56s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-mvsql" [20605c94-17b8-4907-b0f9-98caafd5a9ac] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011169378s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-004867
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                    
TestAddons/parallel/LocalPath (54.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-004867 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-004867 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [66d19f95-95e0-4430-8ade-0d37ddf0b1ca] Pending
helpers_test.go:344: "test-local-path" [66d19f95-95e0-4430-8ade-0d37ddf0b1ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [66d19f95-95e0-4430-8ade-0d37ddf0b1ca] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [66d19f95-95e0-4430-8ade-0d37ddf0b1ca] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010852929s
addons_test.go:890: (dbg) Run:  kubectl --context addons-004867 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 ssh "cat /opt/local-path-provisioner/pvc-709bb9e2-1272-4f29-8b35-92ea026ee6d1_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-004867 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-004867 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-004867 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-004867 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.423429499s)
--- PASS: TestAddons/parallel/LocalPath (54.17s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jw7m9" [060065fa-bb93-4bde-a940-e2f0d2d797f4] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.015036565s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-004867
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.82s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-004867 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-004867 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-004867
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-004867: (12.132920074s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-004867
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-004867
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-004867
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

                                                
                                    
TestCertOptions (35.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-527120 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1212 00:49:12.325398 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-527120 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.071938702s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-527120 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-527120 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-527120 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-527120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-527120
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-527120: (2.123380909s)
--- PASS: TestCertOptions (35.99s)
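The assertion here is that the extra SANs and the non-default port end up in the generated apiserver serving certificate. A hand-run equivalent (profile name and SAN values are illustrative; the grep is a convenience the test does not use):

    minikube start -p cert-demo --driver=docker --container-runtime=containerd \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
    # inspect the certificate the apiserver is actually serving
    minikube -p cert-demo ssh -- \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A2 "Subject Alternative Name"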

                                                
                                    
TestCertExpiration (233.37s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-535767 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1212 00:48:45.780271 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-535767 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.130432517s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-535767 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-535767 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.67322858s)
helpers_test.go:175: Cleaning up "cert-expiration-535767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-535767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-535767: (2.564066378s)
--- PASS: TestCertExpiration (233.37s)
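The test starts a profile with deliberately short-lived certificates and later restarts it with a longer --cert-expiration window, which renews them. A condensed manual version; the openssl expiry check is an added assumption about the certificate path (the same path used elsewhere in this report):

    minikube start -p certexp-demo --driver=docker --container-runtime=containerd --cert-expiration=3m
    minikube -p certexp-demo ssh -- \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
    # restarting with a longer window re-issues the certificates
    minikube start -p certexp-demo --driver=docker --container-runtime=containerd --cert-expiration=8760h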

                                                
                                    
TestForceSystemdFlag (52.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-162215 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-162215 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (49.569762286s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-162215 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-162215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-162215
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-162215: (2.220874198s)
--- PASS: TestForceSystemdFlag (52.20s)
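This test, like TestForceSystemdEnv below, finishes by reading /etc/containerd/config.toml from the node. A quick manual check of just the cgroup-driver setting (the SystemdCgroup grep is an assumption about the relevant containerd key; the test simply cats the whole file):

    minikube start -p systemd-demo --driver=docker --container-runtime=containerd --force-systemd
    minikube -p systemd-demo ssh -- "grep -n SystemdCgroup /etc/containerd/config.toml"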

                                                
                                    
TestForceSystemdEnv (46.88s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-045870 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-045870 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.34297457s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-045870 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-045870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-045870
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-045870: (2.201966228s)
--- PASS: TestForceSystemdEnv (46.88s)

                                                
                                    
TestDockerEnvContainerd (45.64s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-050234 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-050234 --driver=docker  --container-runtime=containerd: (29.486869312s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-050234"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-050234": (1.517643301s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6P0YbCUnFdYR/agent.1158355" SSH_AGENT_PID="1158356" DOCKER_HOST=ssh://docker@127.0.0.1:34033 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6P0YbCUnFdYR/agent.1158355" SSH_AGENT_PID="1158356" DOCKER_HOST=ssh://docker@127.0.0.1:34033 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6P0YbCUnFdYR/agent.1158355" SSH_AGENT_PID="1158356" DOCKER_HOST=ssh://docker@127.0.0.1:34033 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.210690403s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6P0YbCUnFdYR/agent.1158355" SSH_AGENT_PID="1158356" DOCKER_HOST=ssh://docker@127.0.0.1:34033 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-050234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-050234
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-050234: (2.008704526s)
--- PASS: TestDockerEnvContainerd (45.64s)
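The docker-env flow exercised above can be reproduced directly; the eval form below is the usual way to consume the exported variables (the profile name is illustrative, and DOCKER_BUILDKIT=0 matches the test's build invocation):

    minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
    # exports DOCKER_HOST=ssh://docker@... and loads the node's SSH key into an agent
    eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"
    docker version                                          # now talks to the daemon inside the node
    DOCKER_BUILDKIT=0 docker build -t local/demo:latest .   # the image is built on the minikube node
    docker image ls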

                                                
                                    
TestErrorSpam/setup (33.25s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-111319 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-111319 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-111319 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-111319 --driver=docker  --container-runtime=containerd: (33.248903447s)
--- PASS: TestErrorSpam/setup (33.25s)

                                                
                                    
TestErrorSpam/start (0.9s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

                                                
                                    
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
TestErrorSpam/unpause (2.07s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 unpause
--- PASS: TestErrorSpam/unpause (2.07s)

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 stop: (1.234415831s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-111319 --log_dir /tmp/nospam-111319 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17764-1135857/.minikube/files/etc/test/nested/copy/1141281/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (62.53s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-204186 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1212 00:19:12.324681 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.331871 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.342136 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.362610 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.402883 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.483235 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.643787 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:12.964294 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:13.604881 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:14.885353 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:19:17.446221 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-204186 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m2.532299375s)
--- PASS: TestFunctional/serial/StartWithProxy (62.53s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-204186 --alsologtostderr -v=8
E1212 00:19:22.567422 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-204186 --alsologtostderr -v=8: (6.633312538s)
functional_test.go:659: soft start took 6.63467568s for "functional-204186" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.63s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-204186 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 cache add registry.k8s.io/pause:3.1: (1.649064269s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 cache add registry.k8s.io/pause:3.3: (1.375663004s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cache add registry.k8s.io/pause:latest
E1212 00:19:32.807709 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 cache add registry.k8s.io/pause:latest: (1.263951867s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)
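
The subtest above is a straight sequence of cache add calls for remote images. A sketch of the equivalent commands, assuming the same profile and a minikube binary on PATH:

	# Each image is pulled once and loaded into the node's image store:
	minikube -p functional-204186 cache add registry.k8s.io/pause:3.1
	minikube -p functional-204186 cache add registry.k8s.io/pause:3.3
	minikube -p functional-204186 cache add registry.k8s.io/pause:latest
	# Host-side view of what is cached:
	minikube cache list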

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-204186 /tmp/TestFunctionalserialCacheCmdcacheadd_local416682349/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cache add minikube-local-cache-test:functional-204186
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 cache add minikube-local-cache-test:functional-204186: (1.005519713s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cache delete minikube-local-cache-test:functional-204186
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-204186
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)
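
The local variant caches an image built on the host. A sketch following the logged commands, with the test's temporary build context replaced by a hypothetical ./cache-test directory:

	docker build -t minikube-local-cache-test:functional-204186 ./cache-test
	minikube -p functional-204186 cache add minikube-local-cache-test:functional-204186
	# Clean up the cache entry and the host-side image:
	minikube -p functional-204186 cache delete minikube-local-cache-test:functional-204186
	docker rmi minikube-local-cache-test:functional-204186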

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (335.700967ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 cache reload: (1.227243565s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)
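
The reload check removes an image inside the node, confirms it is gone, then restores it from the host cache. The sequence from the log, written out as shell:

	# Drop the image from the node's containerd store:
	minikube -p functional-204186 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# Now fails with "no such image ... present" (exit status 1):
	minikube -p functional-204186 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# Push everything in the host cache back into the node:
	minikube -p functional-204186 cache reload
	# Succeeds again:
	minikube -p functional-204186 ssh sudo crictl inspecti registry.k8s.io/pause:latest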

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 kubectl -- --context functional-204186 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-204186 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 logs: (1.582267883s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 config get cpus: exit status 14 (105.435579ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 config get cpus: exit status 14 (93.289857ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)
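
The config subtest is a set/get/unset round trip, including the non-zero exit for a missing key seen above. A sketch of those calls:

	minikube -p functional-204186 config unset cpus
	minikube -p functional-204186 config get cpus    # exit status 14: key not found
	minikube -p functional-204186 config set cpus 2
	minikube -p functional-204186 config get cpus    # prints 2
	minikube -p functional-204186 config unset cpus
	minikube -p functional-204186 config get cpus    # exit status 14 again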

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-204186 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-204186 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1174130: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.66s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-204186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-204186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (208.664448ms)

                                                
                                                
-- stdout --
	* [functional-204186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:21:34.341795 1173871 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:21:34.341997 1173871 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:34.342008 1173871 out.go:309] Setting ErrFile to fd 2...
	I1212 00:21:34.342015 1173871 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:34.342381 1173871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:21:34.342803 1173871 out.go:303] Setting JSON to false
	I1212 00:21:34.343862 1173871 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25442,"bootTime":1702315053,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:21:34.343935 1173871 start.go:138] virtualization:  
	I1212 00:21:34.346648 1173871 out.go:177] * [functional-204186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:21:34.348705 1173871 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:21:34.348856 1173871 notify.go:220] Checking for updates...
	I1212 00:21:34.350597 1173871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:34.353539 1173871 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:21:34.355688 1173871 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:21:34.357628 1173871 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:34.359472 1173871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:34.361767 1173871 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:21:34.362325 1173871 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:21:34.386604 1173871 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:21:34.386732 1173871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:34.473452 1173871 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 00:21:34.462255147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:21:34.473559 1173871 docker.go:295] overlay module found
	I1212 00:21:34.475743 1173871 out.go:177] * Using the docker driver based on existing profile
	I1212 00:21:34.477896 1173871 start.go:298] selected driver: docker
	I1212 00:21:34.477914 1173871 start.go:902] validating driver "docker" against &{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:21:34.478018 1173871 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:34.480535 1173871 out.go:177] 
	W1212 00:21:34.482939 1173871 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:21:34.484800 1173871 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-204186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
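
The dry-run pair above shows validation rejecting an undersized memory request before any resources are touched. A sketch of the two invocations:

	# Rejected: exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY (250MiB is below the 1800MB minimum).
	minikube start -p functional-204186 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=containerd
	# Accepted: the same dry run without the undersized memory flag.
	minikube start -p functional-204186 --dry-run --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=containerd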

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-204186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-204186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (224.165708ms)

                                                
                                                
-- stdout --
	* [functional-204186] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:21:34.826905 1173975 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:21:34.827053 1173975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:34.827063 1173975 out.go:309] Setting ErrFile to fd 2...
	I1212 00:21:34.827069 1173975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:34.827484 1173975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:21:34.827875 1173975 out.go:303] Setting JSON to false
	I1212 00:21:34.828828 1173975 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25442,"bootTime":1702315053,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:21:34.828940 1173975 start.go:138] virtualization:  
	I1212 00:21:34.831525 1173975 out.go:177] * [functional-204186] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1212 00:21:34.834218 1173975 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:21:34.836144 1173975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:34.834403 1173975 notify.go:220] Checking for updates...
	I1212 00:21:34.839914 1173975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:21:34.841945 1173975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:21:34.843969 1173975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:34.846136 1173975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:34.848812 1173975 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:21:34.849403 1173975 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:21:34.874996 1173975 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:21:34.875097 1173975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:34.966393 1173975 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 00:21:34.956842346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:21:34.966497 1173975 docker.go:295] overlay module found
	I1212 00:21:34.968863 1173975 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:21:34.970730 1173975 start.go:298] selected driver: docker
	I1212 00:21:34.970751 1173975 start.go:902] validating driver "docker" against &{Name:functional-204186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-204186 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:21:34.970873 1173975 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:34.973855 1173975 out.go:177] 
	W1212 00:21:34.976136 1173975 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:21:34.978293 1173975 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
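
The status subtest covers the default, templated, and JSON output modes. A sketch, with the Go template keys mirroring the fields used in the logged command:

	minikube -p functional-204186 status
	minikube -p functional-204186 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-204186 status -o json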

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-204186 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-204186 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-r5k44" [ac6a1f84-ee0a-4ddc-abb9-e4ee53db803d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-r5k44" [ac6a1f84-ee0a-4ddc-abb9-e4ee53db803d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.011289426s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30749
functional_test.go:1674: http://192.168.49.2:30749: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-r5k44

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30749
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.65s)
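
The connectivity test deploys an echo server, exposes it as a NodePort, asks minikube for the URL, and fetches it. A sketch based on the logged steps; the final curl is an illustrative stand-in for the HTTP request the test makes from Go:

	kubectl --context functional-204186 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-204186 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	# Once the pod is Running, resolve the NodePort URL and hit it:
	URL=$(minikube -p functional-204186 service hello-node-connect --url)
	curl -s "$URL"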

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (99.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f4424a2e-f114-46c8-9059-3ddd8cab9386] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011653732s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-204186 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-204186 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
E1212 00:20:34.249982 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204186 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-204186 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [76bcd40f-d8d1-425c-8b16-1a6bb4df6c63] Pending
helpers_test.go:344: "sp-pod" [76bcd40f-d8d1-425c-8b16-1a6bb4df6c63] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2023/12/12 00:21:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [76bcd40f-d8d1-425c-8b16-1a6bb4df6c63] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.021571985s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-204186 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-204186 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-204186 delete -f testdata/storage-provisioner/pod.yaml: (1.291007863s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-204186 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6e048e4c-78c9-478b-97aa-8023736c3d29] Pending
helpers_test.go:344: "sp-pod" [6e048e4c-78c9-478b-97aa-8023736c3d29] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1212 00:21:56.170942 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [6e048e4c-78c9-478b-97aa-8023736c3d29] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.016594528s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-204186 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (99.93s)
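
The persistence check writes a file from one pod, deletes the pod, recreates it, and confirms the file is still on the claim. A sketch of the kubectl sequence from the log; the testdata manifests are the ones shipped with the minikube test suite:

	kubectl --context functional-204186 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-204186 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-204186 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod; the PVC, and the file on it, survive:
	kubectl --context functional-204186 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-204186 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-204186 exec sp-pod -- ls /tmp/mount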

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh -n functional-204186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 cp functional-204186:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd87923566/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh -n functional-204186 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1141281/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /etc/test/nested/copy/1141281/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1141281.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /etc/ssl/certs/1141281.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1141281.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /usr/share/ca-certificates/1141281.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11412812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /etc/ssl/certs/11412812.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11412812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /usr/share/ca-certificates/11412812.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.39s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh "sudo systemctl is-active docker": exit status 1 (434.367239ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh "sudo systemctl is-active crio": exit status 1 (384.714434ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
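
With containerd as the active runtime, the other runtimes' units should report inactive. A sketch of the two probes; systemctl is-active exits 3 for an inactive unit, which is why the ssh commands above return non-zero:

	minikube -p functional-204186 ssh "sudo systemctl is-active docker"   # prints "inactive"
	minikube -p functional-204186 ssh "sudo systemctl is-active crio"     # prints "inactive"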

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 version -o=json --components: (1.419806934s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-204186 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-204186
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-204186 image ls --format short --alsologtostderr:
I1212 00:21:47.086516 1174647 out.go:296] Setting OutFile to fd 1 ...
I1212 00:21:47.086730 1174647 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:47.086759 1174647 out.go:309] Setting ErrFile to fd 2...
I1212 00:21:47.086780 1174647 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:47.087059 1174647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
I1212 00:21:47.087831 1174647 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:47.087988 1174647 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:47.088593 1174647 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
I1212 00:21:47.111249 1174647 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:47.111299 1174647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
I1212 00:21:47.134442 1174647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
I1212 00:21:47.234304 1174647 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-204186 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:5628e5 | 67.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-204186  | sha256:9c78c2 | 1.01kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| docker.io/library/nginx                     | alpine             | sha256:f09fc9 | 17.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| localhost/my-image                          | functional-204186  | sha256:e0a8be | 831kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-204186 image ls --format table --alsologtostderr:
I1212 00:21:51.757900 1175032 out.go:296] Setting OutFile to fd 1 ...
I1212 00:21:51.758047 1175032 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:51.758059 1175032 out.go:309] Setting ErrFile to fd 2...
I1212 00:21:51.758066 1175032 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:51.758329 1175032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
I1212 00:21:51.758995 1175032 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:51.759133 1175032 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:51.759652 1175032 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
I1212 00:21:51.778387 1175032 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:51.778443 1175032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
I1212 00:21:51.801281 1175032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
I1212 00:21:51.901605 1175032 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
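
The ImageCommands listing subtests differ only in output format. A sketch of the variants exercised in this group:

	minikube -p functional-204186 image ls --format short
	minikube -p functional-204186 image ls --format table
	minikube -p functional-204186 image ls --format json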

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-204186 image ls --format json --alsologtostderr:
[{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02
d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:f09fc93534f6a80e1cb
9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":["docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17606180"},{"id":"sha256:5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"67241575"},{"id":"sha256:e0a8be447d1b11d6ae3014d67f28974aec0a6a49c06605af38d090b935d0d630","repoDigests":[],"repoTags":["localhost/my-image:functional-204186"],"size":"830631"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registr
y.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:9c78c2ece32fe1f09bd5b36dfeaf46e0bdb53aad389db53255b59543e914b652","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-204186"],"size":"1007"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"s
ize":"18306114"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-204186 image ls --format json --alsologtostderr:
I1212 00:21:51.474440 1175004 out.go:296] Setting OutFile to fd 1 ...
I1212 00:21:51.474630 1175004 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:51.474655 1175004 out.go:309] Setting ErrFile to fd 2...
I1212 00:21:51.474673 1175004 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:51.475001 1175004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
I1212 00:21:51.475719 1175004 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:51.475928 1175004 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:51.476454 1175004 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
I1212 00:21:51.507776 1175004 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:51.507831 1175004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
I1212 00:21:51.535253 1175004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
I1212 00:21:51.645178 1175004 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
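The JSON listing above is machine-readable, so it can be post-processed outside the test harness. A minimal sketch, assuming jq is installed on the host and the functional-204186 profile is still running; the field names (repoTags, size) are taken from the output captured above:

  # Print every tagged image reported by the runtime together with its size in bytes
  out/minikube-linux-arm64 -p functional-204186 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'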

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-204186 image ls --format yaml --alsologtostderr:
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:9c78c2ece32fe1f09bd5b36dfeaf46e0bdb53aad389db53255b59543e914b652
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-204186
size: "1007"
- id: sha256:f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests:
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
repoTags:
- docker.io/library/nginx:alpine
size: "17606180"
- id: sha256:5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
repoTags:
- docker.io/library/nginx:latest
size: "67241575"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:e0a8be447d1b11d6ae3014d67f28974aec0a6a49c06605af38d090b935d0d630
repoDigests: []
repoTags:
- localhost/my-image:functional-204186
size: "830631"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-204186 image ls --format yaml --alsologtostderr:
I1212 00:21:51.218972 1174976 out.go:296] Setting OutFile to fd 1 ...
I1212 00:21:51.219129 1174976 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:51.219139 1174976 out.go:309] Setting ErrFile to fd 2...
I1212 00:21:51.219145 1174976 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:51.219470 1174976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
I1212 00:21:51.220132 1174976 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:51.220271 1174976 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:51.220938 1174976 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
I1212 00:21:51.239608 1174976 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:51.239671 1174976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
I1212 00:21:51.257353 1174976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
I1212 00:21:51.357076 1174976 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh pgrep buildkitd: exit status 1 (354.479692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image build -t localhost/my-image:functional-204186 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-204186 image build -t localhost/my-image:functional-204186 testdata/build --alsologtostderr: (3.224145901s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-204186 image build -t localhost/my-image:functional-204186 testdata/build --alsologtostderr:
I1212 00:21:47.725427 1174796 out.go:296] Setting OutFile to fd 1 ...
I1212 00:21:47.726194 1174796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:47.726231 1174796 out.go:309] Setting ErrFile to fd 2...
I1212 00:21:47.726256 1174796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:47.726561 1174796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
I1212 00:21:47.727265 1174796 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:47.727990 1174796 config.go:182] Loaded profile config "functional-204186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1212 00:21:47.728536 1174796 cli_runner.go:164] Run: docker container inspect functional-204186 --format={{.State.Status}}
I1212 00:21:47.748110 1174796 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:47.748166 1174796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-204186
I1212 00:21:47.771521 1174796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/functional-204186/id_rsa Username:docker}
I1212 00:21:47.873437 1174796 build_images.go:151] Building image from path: /tmp/build.1948241825.tar
I1212 00:21:47.873504 1174796 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:21:47.884766 1174796 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1948241825.tar
I1212 00:21:47.889812 1174796 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1948241825.tar: stat -c "%s %y" /var/lib/minikube/build/build.1948241825.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1948241825.tar': No such file or directory
I1212 00:21:47.889847 1174796 ssh_runner.go:362] scp /tmp/build.1948241825.tar --> /var/lib/minikube/build/build.1948241825.tar (3072 bytes)
I1212 00:21:47.923100 1174796 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1948241825
I1212 00:21:47.934233 1174796 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1948241825 -xf /var/lib/minikube/build/build.1948241825.tar
I1212 00:21:47.946040 1174796 containerd.go:378] Building image: /var/lib/minikube/build/build.1948241825
I1212 00:21:47.946120 1174796 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1948241825 --local dockerfile=/var/lib/minikube/build/build.1948241825 --output type=image,name=localhost/my-image:functional-204186
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e596e50d322ee93a12b602d9d24ee416e8b93ea8d8e8777fdda62b5ac0b18123 0.0s done
#8 exporting config sha256:e0a8be447d1b11d6ae3014d67f28974aec0a6a49c06605af38d090b935d0d630 0.0s done
#8 naming to localhost/my-image:functional-204186 done
#8 DONE 0.1s
I1212 00:21:50.851895 1174796 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1948241825 --local dockerfile=/var/lib/minikube/build/build.1948241825 --output type=image,name=localhost/my-image:functional-204186: (2.905745881s)
I1212 00:21:50.851958 1174796 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1948241825
I1212 00:21:50.863135 1174796 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1948241825.tar
I1212 00:21:50.874471 1174796 build_images.go:207] Built localhost/my-image:functional-204186 from /tmp/build.1948241825.tar
I1212 00:21:50.874503 1174796 build_images.go:123] succeeded building to: functional-204186
I1212 00:21:50.874509 1174796 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)
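The build steps logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) can be reproduced by hand with the image build subcommand. The sketch below is an approximation inferred from the buildkit log, not a copy of the real testdata/build directory, and the file content is a placeholder:

  # Recreate a roughly equivalent build context (assumed layout, not the actual testdata/build)
  mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
  printf 'placeholder content\n' > content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  out/minikube-linux-arm64 -p functional-204186 image build -t localhost/my-image:functional-204186 .
  out/minikube-linux-arm64 -p functional-204186 image ls | grep my-image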

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.613312376s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-204186
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-204186 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-204186 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-204186 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1171273: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-204186 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.89s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-204186 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (49.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-204186 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-204186 apply -f testdata/testsvc.yaml: (1.259809225s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f508d7d3-1871-46ec-bed2-94e5580d7513] Pending
helpers_test.go:344: "nginx-svc" [f508d7d3-1871-46ec-bed2-94e5580d7513] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f508d7d3-1871-46ec-bed2-94e5580d7513] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 48.029741134s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (49.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image rm gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-204186
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 image save --daemon gcr.io/google-containers/addon-resizer:functional-204186 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-204186
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-204186 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
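With a tunnel process running (as in the daemon lines above), the same LoadBalancer check can be repeated by hand: read the ingress IP with the jsonpath query used by the test, then curl it. A minimal sketch, assuming the nginx-svc service from testdata/testsvc.yaml is still deployed:

  # Requires an active `out/minikube-linux-arm64 -p functional-204186 tunnel` in another shell
  SVC_IP=$(kubectl --context functional-204186 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sI "http://${SVC_IP}/" | head -n 1   # expect an HTTP response from nginx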

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.74.38 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-204186 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "363.346263ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "83.632471ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "372.799344ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "76.858553ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
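The `profile list -o json` output timed by these checks can also be consumed with jq. A minimal sketch; the `.valid[]`, `.Name`, and `.Status` keys are assumptions about the JSON layout rather than something shown in this log, so adjust them to whatever your minikube version actually emits:

  # Print name and status for each profile minikube considers valid (key names assumed)
  out/minikube-linux-arm64 profile list -o json \
    | jq -r '.valid[] | "\(.Name)\t\(.Status)"'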

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdany-port316775472/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702340482442102192" to /tmp/TestFunctionalparallelMountCmdany-port316775472/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702340482442102192" to /tmp/TestFunctionalparallelMountCmdany-port316775472/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702340482442102192" to /tmp/TestFunctionalparallelMountCmdany-port316775472/001/test-1702340482442102192
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (433.389691ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:21 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:21 test-1702340482442102192
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh cat /mount-9p/test-1702340482442102192
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-204186 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [61f3aefb-f4cd-4ccc-9135-e7b747feb1f1] Pending
helpers_test.go:344: "busybox-mount" [61f3aefb-f4cd-4ccc-9135-e7b747feb1f1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [61f3aefb-f4cd-4ccc-9135-e7b747feb1f1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [61f3aefb-f4cd-4ccc-9135-e7b747feb1f1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.025642907s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-204186 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdany-port316775472/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.20s)
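The 9p mount flow exercised here can be repeated manually with the same commands the test runs. A minimal sketch, assuming the functional-204186 profile is still up and /tmp is writable on the host:

  # Mount a host directory into the guest over 9p, then verify it from inside the node
  mkdir -p /tmp/mount-sketch && echo hello > /tmp/mount-sketch/created-by-hand
  out/minikube-linux-arm64 mount -p functional-204186 /tmp/mount-sketch:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!
  sleep 2                                                          # give the 9p server a moment to come up
  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-204186 ssh -- ls -la /mount-9p
  kill "$MOUNT_PID"                                                # tear the mount process down when done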

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdspecific-port316731646/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (449.556523ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdspecific-port316731646/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh "sudo umount -f /mount-9p": exit status 1 (300.931576ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-204186 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdspecific-port316731646/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup701036946/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup701036946/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup701036946/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T" /mount1: exit status 1 (752.324747ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-204186 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-204186 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup701036946/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup701036946/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-204186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup701036946/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-204186
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-204186
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-204186
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (89.21s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-491046 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-491046 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m29.211919843s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (89.21s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.03s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons enable ingress --alsologtostderr -v=5: (10.028012431s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.03s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.78s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-491046 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.78s)

                                                
                                    
TestJSONOutput/start/Command (60.73s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-696268 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1212 00:25:20.470700 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:20.475976 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:20.486405 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:20.506633 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:20.546876 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:20.627141 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:20.787535 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:21.108121 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:21.748954 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:23.029259 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:25.590549 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:30.711011 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:25:40.951349 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-696268 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m0.728667024s)
--- PASS: TestJSONOutput/start/Command (60.73s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.84s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-696268 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-696268 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-696268 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-696268 --output=json --user=testUser: (5.839880823s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.28s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-003229 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-003229 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.702112ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a05cced6-03ab-4e2b-ba87-d3575d373268","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-003229] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9703b696-3c23-4177-b19e-fa156d29c8fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17764"}}
	{"specversion":"1.0","id":"584e52d8-2cc3-4927-9288-ac3e11f2be54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b9870ca-3dcc-4a98-9728-def842af470c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig"}}
	{"specversion":"1.0","id":"6f45b59b-61cf-4193-8268-0d59663f9db9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube"}}
	{"specversion":"1.0","id":"5ebf2113-0f99-401b-8001-10768fcba101","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"339c2ae7-ca04-4da7-b789-ed0eeaa80323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"71a75f79-065e-4c0b-b92d-da7b90b2a246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-003229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-003229
--- PASS: TestErrorJSONOutput (0.28s)
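
Each line in the stdout block above is a self-contained CloudEvents-style JSON object whose payload sits under "data". As a rough sketch (not minikube's own parser; the type and field names are copied from the events above, the handling is illustrative), a consumer of that stream could look like this:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the CloudEvents-style lines emitted by `minikube start --output=json`.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events arrive as long single lines
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
			continue
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the stdout block above into this program would print the initial setup step and then the DRV_UNSUPPORTED_OS error with exit code 56.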

                                                
                                    
TestKicCustomNetwork/create_custom_network (43.03s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-599286 --network=
E1212 00:26:01.431561 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-599286 --network=: (40.814930448s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-599286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-599286
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-599286: (2.188421566s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.03s)
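
The verification step above is a single `docker network ls --format {{.Name}}`; a minimal Go sketch of the same check, assuming the docker CLI is on PATH and using the profile/network name from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// networkExists lists docker network names and looks for the one the
// minikube profile is expected to have created.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if n == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkExists("docker-network-599286") // profile name from the run above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("network present:", ok)
}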

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.37s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-008475 --network=bridge
E1212 00:26:42.391805 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-008475 --network=bridge: (36.257127216s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-008475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-008475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-008475: (2.085406559s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.37s)

                                                
                                    
TestKicExistingNetwork (34.88s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-177637 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-177637 --network=existing-network: (32.750995901s)
helpers_test.go:175: Cleaning up "existing-network-177637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-177637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-177637: (1.967292009s)
--- PASS: TestKicExistingNetwork (34.88s)

                                                
                                    
TestKicCustomSubnet (36.55s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-549495 --subnet=192.168.60.0/24
E1212 00:28:04.312006 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-549495 --subnet=192.168.60.0/24: (34.364954018s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-549495 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-549495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-549495
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-549495: (2.152464361s)
--- PASS: TestKicCustomSubnet (36.55s)
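
The subnet assertion reduces to one `docker network inspect` call with a Go template; a small sketch of the same comparison, with the network name and CIDR taken from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same inspect command the test runs against the network named after the profile.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-549495",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Println("subnet matches request:", got == "192.168.60.0/24")
}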

                                                
                                    
TestKicStaticIP (38.49s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-553190 --static-ip=192.168.200.200
E1212 00:28:45.779664 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:45.784919 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:45.795197 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:45.815481 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:45.855813 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:45.936058 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:46.096405 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:46.416904 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:47.057207 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:48.338110 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:50.898856 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:28:56.019586 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-553190 --static-ip=192.168.200.200: (36.136811393s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-553190 ip
helpers_test.go:175: Cleaning up "static-ip-553190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-553190
E1212 00:29:06.259861 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-553190: (2.132562117s)
--- PASS: TestKicStaticIP (38.49s)
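
The final check pairs the requested --static-ip with the address `minikube ip` reports. A sketch of that comparison, using the binary path and profile name from this job (a normal install would just call `minikube`):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the profile for the IP it actually got and compare with the request.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-553190", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	fmt.Println("static IP honoured:", strings.TrimSpace(string(out)) == "192.168.200.200")
}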

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (69.14s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-363818 --driver=docker  --container-runtime=containerd
E1212 00:29:12.324523 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:29:26.740074 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-363818 --driver=docker  --container-runtime=containerd: (30.827862976s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-366411 --driver=docker  --container-runtime=containerd
E1212 00:30:07.700939 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-366411 --driver=docker  --container-runtime=containerd: (32.732177986s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-363818
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-366411
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-366411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-366411
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-366411: (2.018656886s)
helpers_test.go:175: Cleaning up "first-363818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-363818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-363818: (2.243179819s)
--- PASS: TestMinikubeProfile (69.14s)
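
Both profile checks go through `minikube profile list --output json`. A hedged sketch of decoding that output follows; the top-level "valid"/"invalid" keys and the "Name" field are assumptions about the JSON shape, not a documented contract:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList assumes the shape `profile list --output json` appears to use:
// "valid" and "invalid" arrays of profiles that each carry a "Name" field.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}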

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-001167 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1212 00:30:20.471474 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-001167 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.296874477s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.30s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-001167 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
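
The verify steps in this group only assert that the host directory is visible at /minikube-host inside the node; a sketch of the same ssh probe, reusing the first profile from the run above and the binary path used by this job:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same check the test runs: list the mount point inside the node over ssh.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-001167",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Printf("mount not visible: %v\n%s", err, out)
		return
	}
	fmt.Printf("contents of /minikube-host:\n%s", out)
}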

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-002882 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-002882 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.004335889s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.00s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-002882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-001167 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-001167 --alsologtostderr -v=5: (1.685765285s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-002882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-002882
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-002882: (1.231662748s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.72s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-002882
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-002882: (6.723619947s)
--- PASS: TestMountStart/serial/RestartStopped (7.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-002882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (79.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-563480 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1212 00:30:48.152457 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:31:29.621693 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-563480 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.549581485s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.11s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (10.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-563480 -- rollout status deployment/busybox: (3.256868993s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-bsstb -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-htx2h -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-htx2h -- nslookup kubernetes.io: (5.269091282s)
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-bsstb -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-htx2h -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-bsstb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-htx2h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.37s)
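
The deployment check pulls the pod IPs with a jsonpath query through the bundled kubectl; one way to confirm the two busybox replicas got distinct addresses is a short sketch like this (profile and query taken from the run above; the distinct-IP bookkeeping is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath query the test issues via `minikube kubectl`.
	out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-563480", "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ips := strings.Fields(string(out))
	distinct := map[string]bool{}
	for _, ip := range ips {
		distinct[ip] = true
	}
	fmt.Printf("busybox replicas: %d, distinct pod IPs: %d\n", len(ips), len(distinct))
}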

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-bsstb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-bsstb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-htx2h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-563480 -- exec busybox-5bc68d56bd-htx2h -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.14s)

                                                
                                    
TestMultiNode/serial/AddNode (18.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-563480 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-563480 -v 3 --alsologtostderr: (17.404612603s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.16s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-563480 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp testdata/cp-test.txt multinode-563480:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3726137986/001/cp-test_multinode-563480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480:/home/docker/cp-test.txt multinode-563480-m02:/home/docker/cp-test_multinode-563480_multinode-563480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m02 "sudo cat /home/docker/cp-test_multinode-563480_multinode-563480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480:/home/docker/cp-test.txt multinode-563480-m03:/home/docker/cp-test_multinode-563480_multinode-563480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m03 "sudo cat /home/docker/cp-test_multinode-563480_multinode-563480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp testdata/cp-test.txt multinode-563480-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3726137986/001/cp-test_multinode-563480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480-m02:/home/docker/cp-test.txt multinode-563480:/home/docker/cp-test_multinode-563480-m02_multinode-563480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480 "sudo cat /home/docker/cp-test_multinode-563480-m02_multinode-563480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480-m02:/home/docker/cp-test.txt multinode-563480-m03:/home/docker/cp-test_multinode-563480-m02_multinode-563480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m03 "sudo cat /home/docker/cp-test_multinode-563480-m02_multinode-563480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp testdata/cp-test.txt multinode-563480-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3726137986/001/cp-test_multinode-563480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480-m03:/home/docker/cp-test.txt multinode-563480:/home/docker/cp-test_multinode-563480-m03_multinode-563480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480 "sudo cat /home/docker/cp-test_multinode-563480-m03_multinode-563480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 cp multinode-563480-m03:/home/docker/cp-test.txt multinode-563480-m02:/home/docker/cp-test_multinode-563480-m03_multinode-563480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 ssh -n multinode-563480-m02 "sudo cat /home/docker/cp-test_multinode-563480-m03_multinode-563480-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.82s)
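
Every step in the copy matrix above is a `minikube cp` followed by an ssh `cat` of the target path; a condensed sketch of one such round trip, using names from the run above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile, node := "multinode-563480", "multinode-563480-m02" // names from the run above

	// Copy a local file into the secondary node, as the test does with `minikube cp`.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}

	// Read it back over ssh to confirm the copy landed.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh cat failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("round-tripped contents:\n%s", out)
}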

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-563480 node stop m03: (1.243982728s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-563480 status: exit status 7 (612.522423ms)

                                                
                                                
-- stdout --
	multinode-563480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563480-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-563480-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr: exit status 7 (580.698885ms)

                                                
                                                
-- stdout --
	multinode-563480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563480-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-563480-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:32:50.523591 1222384 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:32:50.523761 1222384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:50.523772 1222384 out.go:309] Setting ErrFile to fd 2...
	I1212 00:32:50.523778 1222384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:50.524036 1222384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:32:50.524210 1222384 out.go:303] Setting JSON to false
	I1212 00:32:50.524261 1222384 mustload.go:65] Loading cluster: multinode-563480
	I1212 00:32:50.524384 1222384 notify.go:220] Checking for updates...
	I1212 00:32:50.524720 1222384 config.go:182] Loaded profile config "multinode-563480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:32:50.524737 1222384 status.go:255] checking status of multinode-563480 ...
	I1212 00:32:50.525277 1222384 cli_runner.go:164] Run: docker container inspect multinode-563480 --format={{.State.Status}}
	I1212 00:32:50.544883 1222384 status.go:330] multinode-563480 host status = "Running" (err=<nil>)
	I1212 00:32:50.544934 1222384 host.go:66] Checking if "multinode-563480" exists ...
	I1212 00:32:50.545246 1222384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-563480
	I1212 00:32:50.564070 1222384 host.go:66] Checking if "multinode-563480" exists ...
	I1212 00:32:50.564404 1222384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:32:50.564457 1222384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-563480
	I1212 00:32:50.594186 1222384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/multinode-563480/id_rsa Username:docker}
	I1212 00:32:50.694183 1222384 ssh_runner.go:195] Run: systemctl --version
	I1212 00:32:50.700020 1222384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:32:50.714432 1222384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:32:50.781292 1222384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-12 00:32:50.77151614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:32:50.781984 1222384 kubeconfig.go:92] found "multinode-563480" server: "https://192.168.58.2:8443"
	I1212 00:32:50.782009 1222384 api_server.go:166] Checking apiserver status ...
	I1212 00:32:50.782053 1222384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:32:50.795358 1222384 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1212 00:32:50.807007 1222384 api_server.go:182] apiserver freezer: "4:freezer:/docker/6db39e291859cb3b98fb4abc3f218e632e716041d77a7156f7f29de1378b0692/kubepods/burstable/pod311183df157ddc8da8c4e58852a69d58/f07db050a9d3c0ff0466fb11984e0a6452793bb845048ac126ad78ead93cf3a2"
	I1212 00:32:50.807082 1222384 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6db39e291859cb3b98fb4abc3f218e632e716041d77a7156f7f29de1378b0692/kubepods/burstable/pod311183df157ddc8da8c4e58852a69d58/f07db050a9d3c0ff0466fb11984e0a6452793bb845048ac126ad78ead93cf3a2/freezer.state
	I1212 00:32:50.818892 1222384 api_server.go:204] freezer state: "THAWED"
	I1212 00:32:50.818923 1222384 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1212 00:32:50.827945 1222384 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1212 00:32:50.827976 1222384 status.go:421] multinode-563480 apiserver status = Running (err=<nil>)
	I1212 00:32:50.827987 1222384 status.go:257] multinode-563480 status: &{Name:multinode-563480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:32:50.828005 1222384 status.go:255] checking status of multinode-563480-m02 ...
	I1212 00:32:50.828311 1222384 cli_runner.go:164] Run: docker container inspect multinode-563480-m02 --format={{.State.Status}}
	I1212 00:32:50.846471 1222384 status.go:330] multinode-563480-m02 host status = "Running" (err=<nil>)
	I1212 00:32:50.846513 1222384 host.go:66] Checking if "multinode-563480-m02" exists ...
	I1212 00:32:50.846808 1222384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-563480-m02
	I1212 00:32:50.870222 1222384 host.go:66] Checking if "multinode-563480-m02" exists ...
	I1212 00:32:50.870559 1222384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:32:50.870613 1222384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-563480-m02
	I1212 00:32:50.892450 1222384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17764-1135857/.minikube/machines/multinode-563480-m02/id_rsa Username:docker}
	I1212 00:32:50.989686 1222384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:32:51.006427 1222384 status.go:257] multinode-563480-m02 status: &{Name:multinode-563480-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:32:51.006480 1222384 status.go:255] checking status of multinode-563480-m03 ...
	I1212 00:32:51.006904 1222384 cli_runner.go:164] Run: docker container inspect multinode-563480-m03 --format={{.State.Status}}
	I1212 00:32:51.033048 1222384 status.go:330] multinode-563480-m03 host status = "Stopped" (err=<nil>)
	I1212 00:32:51.033087 1222384 status.go:343] host is not running, skipping remaining checks
	I1212 00:32:51.033096 1222384 status.go:257] multinode-563480-m03 status: &{Name:multinode-563480-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
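
The stderr trace shows that `status` starts by asking Docker for each node container's state before probing kubelet and the apiserver; a sketch of just that first check (docker CLI on PATH, container names from the run above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState repeats the first check the trace above makes for each node:
// ask Docker for the container's state string (e.g. running or exited).
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "not found"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, node := range []string{"multinode-563480", "multinode-563480-m02", "multinode-563480-m03"} {
		fmt.Printf("%s: %s\n", node, containerState(node))
	}
}

Right after `node stop m03`, this would report the two running containers and an exited m03, which is what the status output above maps to Running/Stopped.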

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-563480 node start m03 --alsologtostderr: (11.490665293s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (123.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-563480
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-563480
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-563480: (25.040469554s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-563480 --wait=true -v=8 --alsologtostderr
E1212 00:33:45.779936 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:34:12.325246 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:34:13.461926 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-563480 --wait=true -v=8 --alsologtostderr: (1m38.313364346s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-563480
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.53s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-563480 node delete m03: (4.362074886s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 stop
E1212 00:35:20.470609 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:35:35.375658 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-563480 stop: (24.129736215s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-563480 status: exit status 7 (102.05412ms)

                                                
                                                
-- stdout --
	multinode-563480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-563480-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr: exit status 7 (112.640296ms)

                                                
                                                
-- stdout --
	multinode-563480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-563480-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:35:36.379094 1231225 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:35:36.379252 1231225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.379263 1231225 out.go:309] Setting ErrFile to fd 2...
	I1212 00:35:36.379269 1231225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:36.379554 1231225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:35:36.379731 1231225 out.go:303] Setting JSON to false
	I1212 00:35:36.379778 1231225 mustload.go:65] Loading cluster: multinode-563480
	I1212 00:35:36.379860 1231225 notify.go:220] Checking for updates...
	I1212 00:35:36.380210 1231225 config.go:182] Loaded profile config "multinode-563480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:35:36.380222 1231225 status.go:255] checking status of multinode-563480 ...
	I1212 00:35:36.380738 1231225 cli_runner.go:164] Run: docker container inspect multinode-563480 --format={{.State.Status}}
	I1212 00:35:36.408325 1231225 status.go:330] multinode-563480 host status = "Stopped" (err=<nil>)
	I1212 00:35:36.408345 1231225 status.go:343] host is not running, skipping remaining checks
	I1212 00:35:36.408352 1231225 status.go:257] multinode-563480 status: &{Name:multinode-563480 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:35:36.408392 1231225 status.go:255] checking status of multinode-563480-m02 ...
	I1212 00:35:36.408718 1231225 cli_runner.go:164] Run: docker container inspect multinode-563480-m02 --format={{.State.Status}}
	I1212 00:35:36.426504 1231225 status.go:330] multinode-563480-m02 host status = "Stopped" (err=<nil>)
	I1212 00:35:36.426523 1231225 status.go:343] host is not running, skipping remaining checks
	I1212 00:35:36.426530 1231225 status.go:257] multinode-563480-m02 status: &{Name:multinode-563480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.34s)
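
`minikube status` exits non-zero when hosts are down (exit status 7 in the runs above), so a caller has to unpack the exit code instead of treating it as a hard failure; a minimal sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Output() still returns the captured stdout even when the command exits non-zero.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-563480", "status").Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}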

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-563480 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-563480 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.879328768s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-563480 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.68s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-563480
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-563480-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-563480-m02 --driver=docker  --container-runtime=containerd: exit status 14 (114.583857ms)

                                                
                                                
-- stdout --
	* [multinode-563480-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-563480-m02' is duplicated with machine name 'multinode-563480-m02' in profile 'multinode-563480'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-563480-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-563480-m03 --driver=docker  --container-runtime=containerd: (32.262832322s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-563480
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-563480: exit status 80 (573.899321ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-563480
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-563480-m03 already exists in multinode-563480-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-563480-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-563480-m03: (2.015031033s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.04s)

                                                
                                    
TestPreload (150.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-835728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1212 00:38:45.780123 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-835728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m19.992316566s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-835728 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-835728 image pull gcr.io/k8s-minikube/busybox: (1.314248454s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-835728
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-835728: (12.068164417s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-835728 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1212 00:39:12.324641 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-835728 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.854340672s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-835728 image list
helpers_test.go:175: Cleaning up "test-preload-835728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-835728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-835728: (2.413676407s)
--- PASS: TestPreload (150.92s)
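
The point of the test is that an image pulled with --preload=false before the stop is still present after the restart; a sketch repeating the final `image list` check with the names from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the images the restarted profile still carries and look for the pulled busybox.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-835728", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	fmt.Println("busybox survived the restart:",
		strings.Contains(string(out), "gcr.io/k8s-minikube/busybox"))
}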

                                                
                                    
TestScheduledStopUnix (107.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-379012 --memory=2048 --driver=docker  --container-runtime=containerd
E1212 00:40:20.471135 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-379012 --memory=2048 --driver=docker  --container-runtime=containerd: (31.42298879s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-379012 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-379012 -n scheduled-stop-379012
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-379012 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-379012 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-379012 -n scheduled-stop-379012
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-379012
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-379012 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1212 00:41:43.512741 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-379012
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-379012: exit status 7 (88.067198ms)

                                                
                                                
-- stdout --
	scheduled-stop-379012
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-379012 -n scheduled-stop-379012
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-379012 -n scheduled-stop-379012: exit status 7 (83.70643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-379012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-379012
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-379012: (4.539793748s)
--- PASS: TestScheduledStopUnix (107.91s)

                                                
                                    
TestInsufficientStorage (13.65s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-464052 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-464052 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.996255032s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"158f1c12-9908-491f-9314-fe8d6ffca218","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-464052] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe2166c3-2331-4d34-9fc9-81a5e860cf20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17764"}}
	{"specversion":"1.0","id":"130b7391-1017-4cd2-beb3-4ef24d149b87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce06ce33-dd2b-4430-8bcd-9c545d089cbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig"}}
	{"specversion":"1.0","id":"91df5dcf-aa16-45ef-88a1-99a35fce437c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube"}}
	{"specversion":"1.0","id":"6022660c-090b-4a3f-9670-788bb7fe5041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"10d7df88-ac4a-46fe-9a76-58b9af5b19fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"202ba5eb-eada-470e-9b38-ef36ea824efc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8cbe4ccf-2e13-486e-a865-64e6f8c9117c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3062de73-dd9b-4d36-a2e8-e3e47803c1cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b9ed38a-f08b-4902-b0af-edeed818faa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5f70c762-2438-4bdc-acd8-7f35c2b7c858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-464052 in cluster insufficient-storage-464052","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d2adc7e-e78f-4992-9333-3ddeafa1fc69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"eec7ba0b-216c-4e84-b657-40d5dc98d1ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"59c697a8-3da4-4971-8ff3-8d0c62790541","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
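The lines above are the CloudEvents-style JSON that minikube emits for --output=json: each event carries a type and a string-keyed data map. A minimal, illustrative Go sketch (not part of status_test.go) for pulling the error advice out of such a line, using only the field names visible in the captured output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// startEvent mirrors the envelope seen in the captured --output=json lines;
	// the struct name itself is illustrative.
	type startEvent struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// One of the error events from the output above, abbreviated.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","message":"Docker is out of disk space!","name":"RSRC_DOCKER_STORAGE"}}`
		var ev startEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}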
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-464052 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-464052 --output=json --layout=cluster: exit status 7 (342.175228ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-464052","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-464052","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:42:05.269208 1248661 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-464052" does not appear in /home/jenkins/minikube-integration/17764-1135857/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-464052 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-464052 --output=json --layout=cluster: exit status 7 (328.274511ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-464052","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-464052","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:42:05.597501 1248716 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-464052" does not appear in /home/jenkins/minikube-integration/17764-1135857/kubeconfig
	E1212 00:42:05.609637 1248716 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/insufficient-storage-464052/events.json: no such file or directory

                                                
                                                
** /stderr **
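The --output=json --layout=cluster payload above has a different shape: a top-level cluster status plus per-node component statuses, all expressed with HTTP-like codes (507 InsufficientStorage, 405 Stopped, 500 Error). A small, illustrative Go sketch for decoding it; the struct names are made up, while the JSON keys come from the captured output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	}

	type clusterState struct {
		Name         string `json:"Name"`
		StatusCode   int    `json:"StatusCode"`
		StatusName   string `json:"StatusName"`
		StatusDetail string `json:"StatusDetail"`
		Nodes        []node `json:"Nodes"`
	}

	func main() {
		// Abbreviated copy of the status JSON captured above.
		raw := `{"Name":"insufficient-storage-464052","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-464052","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var st clusterState
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s (%d) - %s\n", st.Name, st.StatusName, st.StatusCode, st.StatusDetail)
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("  node %s / %s: %s\n", n.Name, name, c.StatusName)
			}
		}
	}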
helpers_test.go:175: Cleaning up "insufficient-storage-464052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-464052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-464052: (1.979323417s)
--- PASS: TestInsufficientStorage (13.65s)

                                                
                                    
TestRunningBinaryUpgrade (96.7s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.2149396164.exe start -p running-upgrade-190224 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.2149396164.exe start -p running-upgrade-190224 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (55.04117038s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-190224 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-190224 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.359390892s)
helpers_test.go:175: Cleaning up "running-upgrade-190224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-190224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-190224: (3.892861044s)
--- PASS: TestRunningBinaryUpgrade (96.70s)

                                                
                                    
TestKubernetesUpgrade (137.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 00:43:45.779766 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:44:12.324742 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.109256757s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-724872
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-724872: (1.280659155s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-724872 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-724872 status --format={{.Host}}: exit status 7 (86.139674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 00:45:08.822143 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.144788841s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-724872 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (111.636391ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-724872] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-724872
	    minikube start -p kubernetes-upgrade-724872 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7248722 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-724872 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
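The downgrade attempt above is expected to fail, and the distinguishing signal is the dedicated exit status 106 that accompanies K8S_DOWNGRADE_UNSUPPORTED. A rough sketch of asserting that exit code with Go's os/exec (illustrative only, not the actual version_upgrade_test.go code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the failed downgrade above.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "kubernetes-upgrade-724872",
			"--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker", "--container-runtime=containerd")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
			fmt.Println("downgrade correctly rejected (K8S_DOWNGRADE_UNSUPPORTED)")
			return
		}
		fmt.Println("unexpected result:", err)
	}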
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 00:45:20.471071 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-724872 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.145491745s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-724872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-724872
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-724872: (2.420866166s)
--- PASS: TestKubernetesUpgrade (137.41s)

                                                
                                    
TestMissingContainerUpgrade (173.83s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.3641942451.exe start -p missing-upgrade-006597 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.3641942451.exe start -p missing-upgrade-006597 --memory=2200 --driver=docker  --container-runtime=containerd: (1m41.020636692s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-006597
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-006597: (1.079254099s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-006597
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-006597 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-006597 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.59496388s)
helpers_test.go:175: Cleaning up "missing-upgrade-006597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-006597
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-006597: (2.574786965s)
--- PASS: TestMissingContainerUpgrade (173.83s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454826 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-454826 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (137.848476ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-454826] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

                                                
                                    
TestPause/serial/Start (69.63s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-435880 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-435880 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m9.626292327s)
--- PASS: TestPause/serial/Start (69.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454826 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454826 --driver=docker  --container-runtime=containerd: (44.058110178s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-454826 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.47s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454826 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454826 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.444129838s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-454826 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-454826 status -o json: exit status 2 (346.451623ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-454826","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-454826
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-454826: (2.016379247s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.81s)

                                                
                                    
TestNoKubernetes/serial/Start (8.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454826 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454826 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.950563736s)
--- PASS: TestNoKubernetes/serial/Start (8.95s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.73s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-435880 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-435880 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.719090652s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-454826 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-454826 "sudo systemctl is-active --quiet service kubelet": exit status 1 (407.462544ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-454826
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-454826: (1.318425615s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454826 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454826 --driver=docker  --container-runtime=containerd: (7.433733644s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.43s)

                                                
                                    
TestPause/serial/Pause (1.02s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-435880 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-435880 --alsologtostderr -v=5: (1.016872388s)
--- PASS: TestPause/serial/Pause (1.02s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-435880 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-435880 --output=json --layout=cluster: exit status 2 (443.984732ms)

                                                
                                                
-- stdout --
	{"Name":"pause-435880","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-435880","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
TestPause/serial/Unpause (0.97s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-435880 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

                                                
                                    
TestPause/serial/PauseAgain (1.23s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-435880 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-435880 --alsologtostderr -v=5: (1.232981218s)
--- PASS: TestPause/serial/PauseAgain (1.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-454826 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-454826 "sudo systemctl is-active --quiet service kubelet": exit status 1 (455.130874ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

                                                
                                    
TestPause/serial/DeletePaused (3.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-435880 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-435880 --alsologtostderr -v=5: (3.030911531s)
--- PASS: TestPause/serial/DeletePaused (3.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-435880
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-435880: exit status 1 (20.319464ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-435880: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (92.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.2534157691.exe start -p stopped-upgrade-887314 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.2534157691.exe start -p stopped-upgrade-887314 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.895881478s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.2534157691.exe -p stopped-upgrade-887314 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.2534157691.exe -p stopped-upgrade-887314 stop: (1.308144729s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-887314 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-887314 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.646107184s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.85s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-887314
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-887314: (1.372200211s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                    
TestNetworkPlugins/group/false (6.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-734789 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-734789 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (313.225743ms)

                                                
                                                
-- stdout --
	* [false-734789] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:48:07.217457 1281605 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:48:07.217643 1281605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:48:07.217649 1281605 out.go:309] Setting ErrFile to fd 2...
	I1212 00:48:07.217655 1281605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:48:07.217896 1281605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1135857/.minikube/bin
	I1212 00:48:07.218311 1281605 out.go:303] Setting JSON to false
	I1212 00:48:07.219427 1281605 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27035,"bootTime":1702315053,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1212 00:48:07.219513 1281605 start.go:138] virtualization:  
	I1212 00:48:07.223687 1281605 out.go:177] * [false-734789] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:48:07.226282 1281605 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:48:07.226361 1281605 notify.go:220] Checking for updates...
	I1212 00:48:07.229108 1281605 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:48:07.231464 1281605 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1135857/kubeconfig
	I1212 00:48:07.234304 1281605 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1135857/.minikube
	I1212 00:48:07.236842 1281605 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:48:07.238941 1281605 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:48:07.242382 1281605 config.go:182] Loaded profile config "force-systemd-flag-162215": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1212 00:48:07.242621 1281605 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:48:07.273143 1281605 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:48:07.273275 1281605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:48:07.427073 1281605 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 00:48:07.416484582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:48:07.427176 1281605 docker.go:295] overlay module found
	I1212 00:48:07.432759 1281605 out.go:177] * Using the docker driver based on user configuration
	I1212 00:48:07.434832 1281605 start.go:298] selected driver: docker
	I1212 00:48:07.434853 1281605 start.go:902] validating driver "docker" against <nil>
	I1212 00:48:07.434867 1281605 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:48:07.437256 1281605 out.go:177] 
	W1212 00:48:07.439615 1281605 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1212 00:48:07.441719 1281605 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-734789 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-734789" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-734789

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: docker daemon config:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: /etc/docker/daemon.json:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: docker system info:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: cri-docker daemon status:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: cri-docker daemon config:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: cri-dockerd version:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: containerd daemon status:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: containerd daemon config:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: /etc/containerd/config.toml:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: containerd config dump:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: crio daemon status:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: crio daemon config:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: /etc/crio:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

>>> host: crio config:
* Profile "false-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734789"

----------------------- debugLogs end: false-734789 [took: 5.974405503s] --------------------------------
helpers_test.go:175: Cleaning up "false-734789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-734789
--- PASS: TestNetworkPlugins/group/false (6.55s)
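Note that every collector in the debugLogs dump above fails the same way: the false-734789 profile and its kubeconfig context do not exist when the post-mortem runs, so the "context does not exist" / "Profile ... not found" lines are expected noise rather than a defect, and the cleanup step still deletes the profile as a precaution. To confirm locally that nothing stale is left behind after such a run, a check along these lines should be enough (assumes minikube and kubectl are on the PATH; this is not part of the test itself):

out/minikube-linux-arm64 profile list        # false-734789 should not be listed
kubectl config get-contexts                  # no false-734789 context should remain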

TestStartStop/group/old-k8s-version/serial/FirstStart (119.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-424365 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1212 00:50:20.471263 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-424365 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m59.145996717s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (119.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-424365 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [796da7cf-7dbf-4ed8-bb02-65ac20205596] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [796da7cf-7dbf-4ed8-bb02-65ac20205596] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.03520413s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-424365 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)
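The DeployApp step above is just a create / wait / exec sequence against the profile's kubeconfig context. A rough manual equivalent is sketched below; the real manifest lives in minikube's testdata directory and is not reproduced here, and the Ready-condition wait is an assumption standing in for the test's own polling:

kubectl --context old-k8s-version-424365 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-424365 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
kubectl --context old-k8s-version-424365 exec busybox -- /bin/sh -c "ulimit -n"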

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-424365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-424365 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/old-k8s-version/serial/Stop (12.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-424365 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-424365 --alsologtostderr -v=3: (12.281081589s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-424365 -n old-k8s-version-424365
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-424365 -n old-k8s-version-424365: exit status 7 (128.603317ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-424365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
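The exit status 7 from the status call is expected while the cluster is stopped, which is why the test notes "may be ok" and goes on to enable the dashboard addon against the stopped profile. If you want the relevant fields in a single call instead of one --format template per invocation, a combined template such as the following should work (a sketch using the same fields the suite queries elsewhere):

out/minikube-linux-arm64 status -p old-k8s-version-424365 --format='{{.Host}} {{.APIServer}} {{.Kubelet}}'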

TestStartStop/group/old-k8s-version/serial/SecondStart (660.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-424365 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-424365 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m0.232244032s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-424365 -n old-k8s-version-424365
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (660.65s)

TestStartStop/group/no-preload/serial/FirstStart (88.91s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-427561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1212 00:52:15.376853 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-427561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m28.913209437s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.91s)

TestStartStop/group/no-preload/serial/DeployApp (10.12s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-427561 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c939578a-81c2-4b4f-ab42-22797aeba744] Pending
helpers_test.go:344: "busybox" [c939578a-81c2-4b4f-ab42-22797aeba744] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 00:53:45.779498 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c939578a-81c2-4b4f-ab42-22797aeba744] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.034551117s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-427561 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.12s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-427561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-427561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05198234s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-427561 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.2s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-427561 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-427561 --alsologtostderr -v=3: (12.200602938s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-427561 -n no-preload-427561
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-427561 -n no-preload-427561: exit status 7 (99.273322ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-427561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (342.97s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-427561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1212 00:54:12.325070 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 00:55:20.470759 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:58:23.513719 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 00:58:45.780355 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 00:59:12.325002 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-427561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m42.567990119s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-427561 -n no-preload-427561
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (342.97s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lqmrq" [93bd3945-fe0e-4777-935f-c6a5851e6733] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lqmrq" [93bd3945-fe0e-4777-935f-c6a5851e6733] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.026946796s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lqmrq" [93bd3945-fe0e-4777-935f-c6a5851e6733] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010635712s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-427561 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-427561 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.51s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-427561 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427561 -n no-preload-427561
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427561 -n no-preload-427561: exit status 2 (380.286614ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427561 -n no-preload-427561
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427561 -n no-preload-427561: exit status 2 (385.72894ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-427561 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427561 -n no-preload-427561
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427561 -n no-preload-427561
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.51s)
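The Pause step pauses the control plane, verifies the paused state with two status probes (APIServer reported as Paused and Kubelet as Stopped, each returning exit status 2), then unpauses and re-checks. A condensed manual version might look like this; the post-unpause values are an assumption, since the test only requires the final status calls to exit 0:

out/minikube-linux-arm64 pause -p no-preload-427561
out/minikube-linux-arm64 status -p no-preload-427561 --format='{{.APIServer}}/{{.Kubelet}}'   # Paused/Stopped while paused
out/minikube-linux-arm64 unpause -p no-preload-427561
out/minikube-linux-arm64 status -p no-preload-427561 --format='{{.APIServer}}/{{.Kubelet}}'   # expected to report Running again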

TestStartStop/group/embed-certs/serial/FirstStart (62.73s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-951508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1212 01:00:20.471234 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-951508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m2.732125293s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.73s)

TestStartStop/group/embed-certs/serial/DeployApp (8.52s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-951508 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [467bc635-13f2-4ec3-8836-433ca82e9cc1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [467bc635-13f2-4ec3-8836-433ca82e9cc1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.031468347s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-951508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-951508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-951508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.124460463s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-951508 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (12.2s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-951508 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-951508 --alsologtostderr -v=3: (12.201479502s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.20s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-951508 -n embed-certs-951508
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-951508 -n embed-certs-951508: exit status 7 (101.417165ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-951508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (343.78s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-951508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1212 01:01:48.823115 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-951508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m43.23714532s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-951508 -n embed-certs-951508
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.78s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-pzrk2" [ddbe2243-9ee8-46b3-ab88-468b9aded0d9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026874452s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-pzrk2" [ddbe2243-9ee8-46b3-ab88-468b9aded0d9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008737613s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-424365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-424365 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
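VerifyKubernetesImages only logs images outside the expected minikube set; the kindnetd and busybox entries above appear to come from the CNI and the busybox test workload, and the test still passes. The same inventory can be inspected by hand, for example with the table format (assuming it is available in this minikube build):

out/minikube-linux-arm64 -p old-k8s-version-424365 image list --format table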

TestStartStop/group/old-k8s-version/serial/Pause (3.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-424365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-424365 -n old-k8s-version-424365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-424365 -n old-k8s-version-424365: exit status 2 (391.284551ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-424365 -n old-k8s-version-424365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-424365 -n old-k8s-version-424365: exit status 2 (389.694687ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-424365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-424365 -n old-k8s-version-424365
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-424365 -n old-k8s-version-424365
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.58s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-965664 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1212 01:03:43.286249 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.291443 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.301647 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.321951 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.362226 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.442546 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.602887 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:43.923411 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:44.564465 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:45.779868 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 01:03:45.845107 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:48.405996 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:03:53.526575 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:04:03.767090 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:04:12.324536 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-965664 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (58.955519478s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.96s)
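The distinguishing flag for this group is --apiserver-port=8444. That the non-default port actually landed in the generated kubeconfig can be spot-checked with something like the following; the jsonpath query is illustrative rather than taken from the test:

kubectl --context default-k8s-diff-port-965664 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should end in :8444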

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-965664 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dbcaa8d0-2820-40b2-9b00-5cf2ac4a353d] Pending
helpers_test.go:344: "busybox" [dbcaa8d0-2820-40b2-9b00-5cf2ac4a353d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 01:04:24.247875 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
helpers_test.go:344: "busybox" [dbcaa8d0-2820-40b2-9b00-5cf2ac4a353d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.035277244s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-965664 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-965664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-965664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.201848261s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-965664 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-965664 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-965664 --alsologtostderr -v=3: (12.312714401s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664: exit status 7 (98.866232ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-965664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (343.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-965664 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1212 01:05:05.208080 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:05:20.470681 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
E1212 01:06:27.128236 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:06:42.082243 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.087770 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.098060 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.118379 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.158685 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.239210 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.399742 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:42.720338 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:43.361358 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:44.641627 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:47.201853 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:06:52.322062 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:07:02.563008 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:07:23.043508 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-965664 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m42.668770265s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (343.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.04s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tlb7z" [6d7daae1-4062-4be9-b73f-984f3c74a6c0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tlb7z" [6d7daae1-4062-4be9-b73f-984f3c74a6c0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.035720514s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tlb7z" [6d7daae1-4062-4be9-b73f-984f3c74a6c0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018488688s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-951508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-951508 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.62s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-951508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-951508 -n embed-certs-951508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-951508 -n embed-certs-951508: exit status 2 (376.055268ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-951508 -n embed-certs-951508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-951508 -n embed-certs-951508: exit status 2 (399.216442ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-951508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-951508 -n embed-certs-951508
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-951508 -n embed-certs-951508
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.62s)

TestStartStop/group/newest-cni/serial/FirstStart (48.15s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-402641 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1212 01:08:04.004557 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-402641 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (48.147338849s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.15s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-402641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-402641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.282927636s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-402641 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-402641 --alsologtostderr -v=3: (1.279317502s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-402641 -n newest-cni-402641
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-402641 -n newest-cni-402641: exit status 7 (103.958595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-402641 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-402641 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1212 01:08:43.286612 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:08:45.780283 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
E1212 01:08:55.377954 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
E1212 01:09:10.968389 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:09:12.325073 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-402641 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (31.955815798s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-402641 -n newest-cni-402641
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-402641 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-402641 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-402641 -n newest-cni-402641
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-402641 -n newest-cni-402641: exit status 2 (404.1359ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-402641 -n newest-cni-402641
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-402641 -n newest-cni-402641: exit status 2 (393.996013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-402641 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-402641 -n newest-cni-402641
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-402641 -n newest-cni-402641
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (87.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1212 01:09:25.925723 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:10:20.470619 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/functional-204186/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m27.793968994s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jjsbt" [0b18a298-10d9-4c1b-8a69-bb83764c2ee0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jjsbt" [0b18a298-10d9-4c1b-8a69-bb83764c2ee0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.027431764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jjsbt" [0b18a298-10d9-4c1b-8a69-bb83764c2ee0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010647481s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-965664 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-965664 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-965664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-965664 --alsologtostderr -v=1: (1.155892984s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664: exit status 2 (416.656668ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664: exit status 2 (417.747319ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-965664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-965664 -n default-k8s-diff-port-965664
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.38s)
E1212 01:16:29.596763 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:16:42.081999 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.59s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ngpxf" [23a4b4cf-f03b-421e-9107-29280b5873d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ngpxf" [23a4b4cf-f03b-421e-9107-29280b5873d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.013326199s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.63s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m24.942375511s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.94s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1212 01:11:42.082180 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
E1212 01:12:09.766302 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/old-k8s-version-424365/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m15.399192599s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-g92kg" [0f8623a2-dbdc-476b-9451-df20203317c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.044837217s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n5bxk" [307ca979-9c4a-4794-ad07-9acad60ba653] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n5bxk" [307ca979-9c4a-4794-ad07-9acad60ba653] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.016009032s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-frbrl" [8ce9eba5-d99e-4f74-8d7c-1caebb98a1ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.052089819s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lvqgw" [af5108cf-2527-4e34-9d3e-e16236b18426] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lvqgw" [af5108cf-2527-4e34-9d3e-e16236b18426] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.035973763s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m9.579203169s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.58s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1212 01:13:43.286283 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/no-preload-427561/client.crt: no such file or directory
E1212 01:13:45.779669 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/ingress-addon-legacy-491046/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.314233501s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tpz29" [5e5a9800-b4a8-4345-9467-e73c4447a83c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tpz29" [5e5a9800-b4a8-4345-9467-e73c4447a83c] Running
E1212 01:14:12.324869 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/addons-004867/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.011475729s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (64.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1212 01:14:43.182443 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/default-k8s-diff-port-965664/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.348655655s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kv25z" [8b7461ec-308c-47e5-ac03-ca31a749ba78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kv25z" [8b7461ec-308c-47e5-ac03-ca31a749ba78] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.011940404s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (92.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1212 01:15:44.623526 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/default-k8s-diff-port-965664/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-734789 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m32.383662763s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t659c" [b5d1e864-7ef8-482e-abe0-e5babd032d6b] Running
E1212 01:15:48.629302 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:48.634693 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:48.647137 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:48.667491 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:48.707861 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:48.790267 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:48.951463 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:49.272070 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:49.913312 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
E1212 01:15:51.194433 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028851271s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-27dsk" [15b0a5a1-8ccf-4e8d-84b5-65b796f0993f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:15:53.754664 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-27dsk" [15b0a5a1-8ccf-4e8d-84b5-65b796f0993f] Running
E1212 01:15:58.875073 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/auto-734789/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.014131369s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-734789 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-734789 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dlkdb" [88f3dc75-4efa-46ed-80a0-60df3e814577] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dlkdb" [88f3dc75-4efa-46ed-80a0-60df3e814577] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.01073619s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-734789 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-734789 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1212 01:17:06.544390 1141281 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/default-k8s-diff-port-965664/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (31/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.66s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-412876 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-412876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-412876
--- SKIP: TestDownloadOnlyKic (0.66s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-188876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-188876
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
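Even a skipped group entry like this one still removes the profile it reserved, which is why the log above shows out/minikube-linux-arm64 delete -p before the SKIP line. A rough sketch of such a cleanup step, with the binary path and helper name chosen here purely for illustration:

    package integration

    import (
        "os/exec"
        "testing"
    )

    // cleanupProfile deletes a minikube profile by shelling out to the built
    // binary; the path and helper name are illustrative, not minikube's own.
    func cleanupProfile(t *testing.T, profile string) {
        t.Helper()
        out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
        if err != nil {
            t.Logf("failed to clean up %q profile: %v\n%s", profile, err, out)
        }
    }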

                                                
                                    
TestNetworkPlugins/group/kubenet (6.1s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-734789 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-734789

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-734789

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/hosts:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/resolv.conf:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-734789

>>> host: crictl pods:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: crictl containers:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> k8s: describe netcat deployment:
error: context "kubenet-734789" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-734789" does not exist

>>> k8s: netcat logs:
error: context "kubenet-734789" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-734789" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-734789" does not exist

>>> k8s: coredns logs:
error: context "kubenet-734789" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-734789" does not exist

>>> k8s: api server logs:
error: context "kubenet-734789" does not exist

>>> host: /etc/cni:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: ip a s:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: ip r s:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: iptables-save:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: iptables table nat:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-734789" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-734789" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-734789" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: kubelet daemon config:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> k8s: kubelet logs:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-734789

>>> host: docker daemon status:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: docker daemon config:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: docker system info:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: cri-docker daemon status:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: cri-docker daemon config:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: cri-dockerd version:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: containerd daemon status:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: containerd daemon config:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: containerd config dump:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: crio daemon status:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: crio daemon config:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: /etc/crio:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

>>> host: crio config:
* Profile "kubenet-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734789"

----------------------- debugLogs end: kubenet-734789 [took: 5.820745411s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-734789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-734789
--- SKIP: TestNetworkPlugins/group/kubenet (6.10s)

TestNetworkPlugins/group/cilium (6.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-734789 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-734789

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-734789

>>> host: /etc/nsswitch.conf:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/hosts:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/resolv.conf:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-734789

>>> host: crictl pods:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: crictl containers:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> k8s: describe netcat deployment:
error: context "cilium-734789" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-734789" does not exist

>>> k8s: netcat logs:
error: context "cilium-734789" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-734789" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-734789" does not exist

>>> k8s: coredns logs:
error: context "cilium-734789" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-734789" does not exist

>>> k8s: api server logs:
error: context "cilium-734789" does not exist

>>> host: /etc/cni:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: ip a s:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: ip r s:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: iptables-save:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: iptables table nat:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-734789

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-734789

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-734789" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-734789" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-734789

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-734789

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-734789" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-734789" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-734789" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-734789" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-734789" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: kubelet daemon config:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> k8s: kubelet logs:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17764-1135857/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 00:48:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: force-systemd-flag-162215
contexts:
- context:
    cluster: force-systemd-flag-162215
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 00:48:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-162215
  name: force-systemd-flag-162215
current-context: force-systemd-flag-162215
kind: Config
preferences: {}
users:
- name: force-systemd-flag-162215
  user:
    client-certificate: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/force-systemd-flag-162215/client.crt
    client-key: /home/jenkins/minikube-integration/17764-1135857/.minikube/profiles/force-systemd-flag-162215/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-734789

>>> host: docker daemon status:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: docker daemon config:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: docker system info:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: cri-docker daemon status:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: cri-docker daemon config:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: cri-dockerd version:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: containerd daemon status:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: containerd daemon config:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: containerd config dump:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: crio daemon status:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: crio daemon config:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: /etc/crio:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

>>> host: crio config:
* Profile "cilium-734789" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734789"

----------------------- debugLogs end: cilium-734789 [took: 6.151749115s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-734789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-734789
--- SKIP: TestNetworkPlugins/group/cilium (6.43s)

                                                
                                    