Test Report: Docker_Linux_containerd_arm64 17217

8716ac0c8da6d39536faafa0827bebe41e78f6a6:2023-09-14:31013

Failed tests (14/303)

TestAddons/parallel/Ingress (38.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-531284 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-531284 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-531284 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bdd355ef-96f3-466a-9771-f4880df77075] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bdd355ef-96f3-466a-9771-f4880df77075] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.011174028s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-531284 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.045226915s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-531284 addons disable ingress-dns --alsologtostderr -v=1: (1.195777802s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-531284 addons disable ingress --alsologtostderr -v=1: (8.003100797s)
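The lookup failure above is the actual test failure: DNS queries to 192.168.49.2 (the minikube node IP, where the ingress-dns addon is expected to answer on port 53) timed out instead of returning a record for hello-john.test. As a diagnostic aid, here is a minimal Go sketch (hypothetical; not part of the minikube test suite) that reproduces the same query with the resolver pinned to that server:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver pinned to the minikube node IP seen in the log above.
	r := &net.Resolver{
		PreferGo: true, // required: the custom Dial is only honored by the Go resolver
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	ips, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// A timeout here is the programmatic equivalent of nslookup's
		// ";; connection timed out; no servers could be reached".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", ips)
}
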
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-531284
helpers_test.go:235: (dbg) docker inspect addons-531284:

-- stdout --
	[
	    {
	        "Id": "e4cb8662e9b0d8b5c8abd0d72f29324f93c8b90155c4a61828f8d91911cb313f",
	        "Created": "2023-09-14T18:44:11.914369719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 498971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T18:44:12.284513747Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d5e38ecae883e5d7fbaaccc26de9209a95c7f11864ba7a4058d1702f044efe72",
	        "ResolvConfPath": "/var/lib/docker/containers/e4cb8662e9b0d8b5c8abd0d72f29324f93c8b90155c4a61828f8d91911cb313f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4cb8662e9b0d8b5c8abd0d72f29324f93c8b90155c4a61828f8d91911cb313f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4cb8662e9b0d8b5c8abd0d72f29324f93c8b90155c4a61828f8d91911cb313f/hosts",
	        "LogPath": "/var/lib/docker/containers/e4cb8662e9b0d8b5c8abd0d72f29324f93c8b90155c4a61828f8d91911cb313f/e4cb8662e9b0d8b5c8abd0d72f29324f93c8b90155c4a61828f8d91911cb313f-json.log",
	        "Name": "/addons-531284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-531284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-531284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3772039fadf75321220af313e44d4550635634d2b90d6c70c67a6bc53ea09501-init/diff:/var/lib/docker/overlay2/b22941fdffad93645039179e8c1eee3cd74765d1689d400cab1ec16e85e4dbbf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3772039fadf75321220af313e44d4550635634d2b90d6c70c67a6bc53ea09501/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3772039fadf75321220af313e44d4550635634d2b90d6c70c67a6bc53ea09501/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3772039fadf75321220af313e44d4550635634d2b90d6c70c67a6bc53ea09501/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-531284",
	                "Source": "/var/lib/docker/volumes/addons-531284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-531284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-531284",
	                "name.minikube.sigs.k8s.io": "addons-531284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a945ca44924820219ff9cb51ae730c5cae4cbc07aa0024bb3fb15cb0a456ba11",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a945ca449248",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-531284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e4cb8662e9b0",
	                        "addons-531284"
	                    ],
	                    "NetworkID": "25a715a44e06cb015b72cac97695e3b47b30d9f1ca0cfa883e632dd3b92578df",
	                    "EndpointID": "2c64abf23ac7733cd9b8e691b800bbb276e447e2b3ab2b52a66e93333828f02e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
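
The inspect dump above confirms the container itself is healthy: State.Running is true, the static address 192.168.49.2 is bound on the addons-531284 network, and SSH is published at 127.0.0.1:33392 via the 22/tcp mapping. For pulling such fields out programmatically rather than by eye, a small hedged sketch (illustrative only; the harness itself shells out to docker inspect with -f Go templates, as later log lines for the 22/tcp port show):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// container models only the slice of "docker inspect" output used below.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// docker inspect prints a JSON array, even for a single container.
	out, err := exec.Command("docker", "inspect", "addons-531284").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		log.Fatalf("no inspect data: %v", err)
	}
	for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
		// With the dump above this prints: ssh forwarded to 127.0.0.1:33392
		fmt.Printf("ssh forwarded to %s:%s\n", b.HostIp, b.HostPort)
	}
}
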
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-531284 -n addons-531284
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-531284 logs -n 25: (1.865220976s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-715947   | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |                     |
	|         | -p download-only-715947        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-715947   | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |                     |
	|         | -p download-only-715947        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC | 14 Sep 23 18:43 UTC |
	| delete  | -p download-only-715947        | download-only-715947   | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC | 14 Sep 23 18:43 UTC |
	| delete  | -p download-only-715947        | download-only-715947   | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC | 14 Sep 23 18:43 UTC |
	| start   | --download-only -p             | download-docker-294869 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |                     |
	|         | download-docker-294869         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p download-docker-294869      | download-docker-294869 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC | 14 Sep 23 18:43 UTC |
	| start   | --download-only -p             | binary-mirror-080763   | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |                     |
	|         | binary-mirror-080763           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43515         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-080763        | binary-mirror-080763   | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC | 14 Sep 23 18:43 UTC |
	| start   | -p addons-531284               | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC | 14 Sep 23 18:46 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	|         | addons-531284                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	|         | -p addons-531284               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-531284 ip               | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	| addons  | addons-531284 addons disable   | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-531284 addons           | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	|         | addons-531284                  |                        |         |         |                     |                     |
	| ssh     | addons-531284 ssh curl -s      | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-531284 ip               | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:46 UTC | 14 Sep 23 18:46 UTC |
	| addons  | addons-531284 addons disable   | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:47 UTC | 14 Sep 23 18:47 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-531284 addons disable   | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:47 UTC | 14 Sep 23 18:47 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons  | addons-531284 addons           | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:47 UTC | 14 Sep 23 18:47 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-531284 addons           | addons-531284          | jenkins | v1.31.2 | 14 Sep 23 18:47 UTC |                     |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:43:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:43:47.540455  498519 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:43:47.540774  498519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:47.540806  498519 out.go:309] Setting ErrFile to fd 2...
	I0914 18:43:47.540827  498519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:47.541161  498519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:43:47.541708  498519 out.go:303] Setting JSON to false
	I0914 18:43:47.542719  498519 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15971,"bootTime":1694701057,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:43:47.542828  498519 start.go:138] virtualization:  
	I0914 18:43:47.545489  498519 out.go:177] * [addons-531284] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:43:47.548170  498519 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:43:47.548296  498519 notify.go:220] Checking for updates...
	I0914 18:43:47.549982  498519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:43:47.552170  498519 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:43:47.554284  498519 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:43:47.555941  498519 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:43:47.558109  498519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:43:47.560085  498519 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:43:47.583262  498519 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:43:47.583369  498519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:43:47.674490  498519 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-14 18:43:47.663267492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:43:47.674602  498519 docker.go:294] overlay module found
	I0914 18:43:47.676837  498519 out.go:177] * Using the docker driver based on user configuration
	I0914 18:43:47.678905  498519 start.go:298] selected driver: docker
	I0914 18:43:47.678920  498519 start.go:902] validating driver "docker" against <nil>
	I0914 18:43:47.678933  498519 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:43:47.679543  498519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:43:47.744511  498519 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-14 18:43:47.734867638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:43:47.744725  498519 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 18:43:47.744995  498519 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:43:47.746972  498519 out.go:177] * Using Docker driver with root privileges
	I0914 18:43:47.748708  498519 cni.go:84] Creating CNI manager for ""
	I0914 18:43:47.748733  498519 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:43:47.748745  498519 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 18:43:47.748761  498519 start_flags.go:321] config:
	{Name:addons-531284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-531284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:43:47.750923  498519 out.go:177] * Starting control plane node addons-531284 in cluster addons-531284
	I0914 18:43:47.752544  498519 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0914 18:43:47.754340  498519 out.go:177] * Pulling base image ...
	I0914 18:43:47.756179  498519 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:43:47.756230  498519 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4
	I0914 18:43:47.756244  498519 cache.go:57] Caching tarball of preloaded images
	I0914 18:43:47.756270  498519 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0914 18:43:47.756316  498519 preload.go:174] Found /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 18:43:47.756327  498519 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on containerd
	I0914 18:43:47.756762  498519 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/config.json ...
	I0914 18:43:47.756797  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/config.json: {Name:mk0d288d1432fd5f8295deabf9cf1db3ad2397b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:43:47.773611  498519 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0914 18:43:47.773736  498519 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory
	I0914 18:43:47.773759  498519 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory, skipping pull
	I0914 18:43:47.773768  498519 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in cache, skipping pull
	I0914 18:43:47.773776  498519 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 as a tarball
	I0914 18:43:47.773785  498519 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 from local cache
	I0914 18:44:03.878964  498519 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 from cached tarball
	I0914 18:44:03.879003  498519 cache.go:195] Successfully downloaded all kic artifacts
	I0914 18:44:03.879056  498519 start.go:365] acquiring machines lock for addons-531284: {Name:mkb481e53f08984ec1964af195f80aca9a88a7f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:44:03.879184  498519 start.go:369] acquired machines lock for "addons-531284" in 105.748µs
	I0914 18:44:03.879215  498519 start.go:93] Provisioning new machine with config: &{Name:addons-531284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-531284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:44:03.879310  498519 start.go:125] createHost starting for "" (driver="docker")
	I0914 18:44:03.881959  498519 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 18:44:03.882212  498519 start.go:159] libmachine.API.Create for "addons-531284" (driver="docker")
	I0914 18:44:03.882237  498519 client.go:168] LocalClient.Create starting
	I0914 18:44:03.882373  498519 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem
	I0914 18:44:04.629956  498519 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem
	I0914 18:44:05.380061  498519 cli_runner.go:164] Run: docker network inspect addons-531284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 18:44:05.399294  498519 cli_runner.go:211] docker network inspect addons-531284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 18:44:05.399379  498519 network_create.go:281] running [docker network inspect addons-531284] to gather additional debugging logs...
	I0914 18:44:05.399399  498519 cli_runner.go:164] Run: docker network inspect addons-531284
	W0914 18:44:05.416851  498519 cli_runner.go:211] docker network inspect addons-531284 returned with exit code 1
	I0914 18:44:05.416888  498519 network_create.go:284] error running [docker network inspect addons-531284]: docker network inspect addons-531284: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-531284 not found
	I0914 18:44:05.416904  498519 network_create.go:286] output of [docker network inspect addons-531284]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-531284 not found
	
	** /stderr **
	I0914 18:44:05.417002  498519 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:44:05.434801  498519 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400025cf10}
	I0914 18:44:05.434842  498519 network_create.go:123] attempt to create docker network addons-531284 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 18:44:05.434902  498519 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-531284 addons-531284
	I0914 18:44:05.508769  498519 network_create.go:107] docker network addons-531284 192.168.49.0/24 created
	I0914 18:44:05.508800  498519 kic.go:117] calculated static IP "192.168.49.2" for the "addons-531284" container
	I0914 18:44:05.508883  498519 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 18:44:05.529005  498519 cli_runner.go:164] Run: docker volume create addons-531284 --label name.minikube.sigs.k8s.io=addons-531284 --label created_by.minikube.sigs.k8s.io=true
	I0914 18:44:05.550546  498519 oci.go:103] Successfully created a docker volume addons-531284
	I0914 18:44:05.550642  498519 cli_runner.go:164] Run: docker run --rm --name addons-531284-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-531284 --entrypoint /usr/bin/test -v addons-531284:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0914 18:44:07.664079  498519 cli_runner.go:217] Completed: docker run --rm --name addons-531284-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-531284 --entrypoint /usr/bin/test -v addons-531284:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib: (2.113393995s)
	I0914 18:44:07.664111  498519 oci.go:107] Successfully prepared a docker volume addons-531284
	I0914 18:44:07.664137  498519 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:44:07.664156  498519 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 18:44:07.664244  498519 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-531284:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 18:44:11.832556  498519 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-531284:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (4.168263923s)
	I0914 18:44:11.832601  498519 kic.go:199] duration metric: took 4.168441 seconds to extract preloaded images to volume
	W0914 18:44:11.832739  498519 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 18:44:11.832852  498519 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 18:44:11.898268  498519 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-531284 --name addons-531284 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-531284 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-531284 --network addons-531284 --ip 192.168.49.2 --volume addons-531284:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0914 18:44:12.292869  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Running}}
	I0914 18:44:12.316255  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:12.340313  498519 cli_runner.go:164] Run: docker exec addons-531284 stat /var/lib/dpkg/alternatives/iptables
	I0914 18:44:12.421623  498519 oci.go:144] the created container "addons-531284" has a running status.
	I0914 18:44:12.421649  498519 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa...
	I0914 18:44:13.241633  498519 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 18:44:13.273564  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:13.299593  498519 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 18:44:13.299613  498519 kic_runner.go:114] Args: [docker exec --privileged addons-531284 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 18:44:13.394407  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:13.426528  498519 machine.go:88] provisioning docker machine ...
	I0914 18:44:13.426559  498519 ubuntu.go:169] provisioning hostname "addons-531284"
	I0914 18:44:13.426637  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:13.447727  498519 main.go:141] libmachine: Using SSH client type: native
	I0914 18:44:13.448153  498519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I0914 18:44:13.448166  498519 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-531284 && echo "addons-531284" | sudo tee /etc/hostname
	I0914 18:44:13.617054  498519 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-531284
	
	I0914 18:44:13.617141  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:13.643355  498519 main.go:141] libmachine: Using SSH client type: native
	I0914 18:44:13.643758  498519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I0914 18:44:13.643776  498519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-531284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-531284/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-531284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:44:13.793722  498519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:44:13.793749  498519 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17217-492678/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-492678/.minikube}
	I0914 18:44:13.793769  498519 ubuntu.go:177] setting up certificates
	I0914 18:44:13.793778  498519 provision.go:83] configureAuth start
	I0914 18:44:13.793842  498519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-531284
	I0914 18:44:13.811172  498519 provision.go:138] copyHostCerts
	I0914 18:44:13.811259  498519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem (1082 bytes)
	I0914 18:44:13.811377  498519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem (1123 bytes)
	I0914 18:44:13.811437  498519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem (1679 bytes)
	I0914 18:44:13.811486  498519 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem org=jenkins.addons-531284 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-531284]
	I0914 18:44:14.324973  498519 provision.go:172] copyRemoteCerts
	I0914 18:44:14.325045  498519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:44:14.325087  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:14.347475  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:14.447484  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:44:14.477445  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 18:44:14.506813  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:44:14.534922  498519 provision.go:86] duration metric: configureAuth took 741.126754ms
	I0914 18:44:14.534950  498519 ubuntu.go:193] setting minikube options for container-runtime
	I0914 18:44:14.535139  498519 config.go:182] Loaded profile config "addons-531284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:44:14.535155  498519 machine.go:91] provisioned docker machine in 1.108609337s
	I0914 18:44:14.535161  498519 client.go:171] LocalClient.Create took 10.652919149s
	I0914 18:44:14.535179  498519 start.go:167] duration metric: libmachine.API.Create for "addons-531284" took 10.652967912s
	I0914 18:44:14.535190  498519 start.go:300] post-start starting for "addons-531284" (driver="docker")
	I0914 18:44:14.535198  498519 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:44:14.535256  498519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:44:14.535300  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:14.552557  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:14.652039  498519 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:44:14.656537  498519 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 18:44:14.656576  498519 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 18:44:14.656636  498519 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 18:44:14.656644  498519 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 18:44:14.656655  498519 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/addons for local assets ...
	I0914 18:44:14.656729  498519 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/files for local assets ...
	I0914 18:44:14.656756  498519 start.go:303] post-start completed in 121.560868ms
	I0914 18:44:14.657103  498519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-531284
	I0914 18:44:14.674833  498519 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/config.json ...
	I0914 18:44:14.675129  498519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:44:14.675183  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:14.692946  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:14.786859  498519 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 18:44:14.792618  498519 start.go:128] duration metric: createHost completed in 10.913292752s
	I0914 18:44:14.792640  498519 start.go:83] releasing machines lock for "addons-531284", held for 10.913443677s
	I0914 18:44:14.792714  498519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-531284
	I0914 18:44:14.810133  498519 ssh_runner.go:195] Run: cat /version.json
	I0914 18:44:14.810192  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:14.810488  498519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:44:14.810549  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:14.828533  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:14.838591  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:14.925253  498519 ssh_runner.go:195] Run: systemctl --version
	I0914 18:44:15.065654  498519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 18:44:15.072414  498519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 18:44:15.106998  498519 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 18:44:15.107141  498519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:44:15.146709  498519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
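
The two `find ... -exec` runs above first patch the loopback CNI config in place, then park any bridge/podman configs under a `.mk_disabled` suffix so only the CNI minikube installs stays active. A minimal Go sketch of the rename pass, assuming the glob patterns from the log and simplified error handling; this is not minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Same families the log disables: bridge and podman CNI configs.
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pat)
			if err != nil {
				panic(err)
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already parked on an earlier run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					panic(err)
				}
				fmt.Println("disabled", m)
			}
		}
	}
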
	I0914 18:44:15.146775  498519 start.go:469] detecting cgroup driver to use...
	I0914 18:44:15.146818  498519 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 18:44:15.146907  498519 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 18:44:15.163113  498519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 18:44:15.178258  498519 docker.go:196] disabling cri-docker service (if available) ...
	I0914 18:44:15.178349  498519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:44:15.194597  498519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:44:15.213097  498519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:44:15.302561  498519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:44:15.413804  498519 docker.go:212] disabling docker service ...
	I0914 18:44:15.413898  498519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:44:15.436757  498519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:44:15.451108  498519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:44:15.542816  498519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:44:15.639312  498519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:44:15.653190  498519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:44:15.673924  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 18:44:15.686304  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 18:44:15.698347  498519 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 18:44:15.698415  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 18:44:15.710408  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:44:15.722565  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 18:44:15.735062  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:44:15.747285  498519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:44:15.759090  498519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
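
The run of `sed -i` edits above rewrites /etc/containerd/config.toml so containerd matches the detected cgroupfs driver, the runc v2 runtime, and the pause:3.9 sandbox image. A sketch of the same rewrite done with Go regexps instead of sed; paths and patterns mirror the log, but this is an illustrative stand-in, not minikube's code:

	package main

	import (
		"os"
		"regexp"
		"strings"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		cfg := string(data)
		// Mirror the sed edits in the log: cgroupfs driver, sandbox image, runc v2.
		cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
			ReplaceAllString(cfg, "${1}SystemdCgroup = false")
		cfg = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
			ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
		cfg = strings.ReplaceAll(cfg,
			`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`)
		if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
			panic(err)
		}
	}

A `systemctl restart containerd` then picks up the rewritten config, which is exactly what the daemon-reload/restart pair a few lines below does.
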
	I0914 18:44:15.771871  498519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:44:15.782028  498519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:44:15.792283  498519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:44:15.880760  498519 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 18:44:16.021097  498519 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 18:44:16.021249  498519 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 18:44:16.029854  498519 start.go:537] Will wait 60s for crictl version
	I0914 18:44:16.029970  498519 ssh_runner.go:195] Run: which crictl
	I0914 18:44:16.034606  498519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:44:16.081873  498519 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.22
	RuntimeApiVersion:  v1
	I0914 18:44:16.081963  498519 ssh_runner.go:195] Run: containerd --version
	I0914 18:44:16.110166  498519 ssh_runner.go:195] Run: containerd --version
	I0914 18:44:16.145412  498519 out.go:177] * Preparing Kubernetes v1.28.1 on containerd 1.6.22 ...
	I0914 18:44:16.147439  498519 cli_runner.go:164] Run: docker network inspect addons-531284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:44:16.164929  498519 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 18:44:16.169554  498519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
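
The grep-then-rewrite pair above is how the `host.minikube.internal` mapping is pinned in the guest's /etc/hosts: drop any stale line for that name, append the current mapping, and copy the result back into place. A small Go sketch of the same idempotent update, under the assumption of direct file access rather than the logged bash pipeline:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any stale host.minikube.internal mapping before re-adding it.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
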
	I0914 18:44:16.183284  498519 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:44:16.183351  498519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:44:16.226158  498519 containerd.go:604] all images are preloaded for containerd runtime.
	I0914 18:44:16.226185  498519 containerd.go:518] Images already preloaded, skipping extraction
	I0914 18:44:16.226248  498519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:44:16.265949  498519 containerd.go:604] all images are preloaded for containerd runtime.
	I0914 18:44:16.265972  498519 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:44:16.266030  498519 ssh_runner.go:195] Run: sudo crictl info
	I0914 18:44:16.307866  498519 cni.go:84] Creating CNI manager for ""
	I0914 18:44:16.307891  498519 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:44:16.307922  498519 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 18:44:16.307943  498519 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-531284 NodeName:addons-531284 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:44:16.308074  498519 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-531284"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:44:16.308144  498519 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-531284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-531284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
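
The kubelet unit text above is rendered in memory and scp'd into place a few lines below as the 10-kubeadm.conf drop-in and kubelet.service. A sketch of writing such a drop-in and reloading systemd; the unit body is shortened and `systemctl` on PATH is assumed, so treat it as illustrative rather than the tool's own routine:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Shortened version of the drop-in the log writes to 10-kubeadm.conf.
		unit := `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	`
		dir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
			panic(err)
		}
		// Pick up the new drop-in before kubeadm starts the kubelet.
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
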
	I0914 18:44:16.308212  498519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 18:44:16.319156  498519 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:44:16.319281  498519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:44:16.330080  498519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0914 18:44:16.351955  498519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:44:16.373483  498519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0914 18:44:16.394891  498519 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 18:44:16.399543  498519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:44:16.412843  498519 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284 for IP: 192.168.49.2
	I0914 18:44:16.412876  498519 certs.go:190] acquiring lock for shared ca certs: {Name:mka5985e85be7ad08b440e022e8dd6d327029a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:16.413680  498519 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key
	I0914 18:44:16.687831  498519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt ...
	I0914 18:44:16.687862  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt: {Name:mkcd3891f9f514b960d5615d3167dff53cd3ba1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:16.688613  498519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key ...
	I0914 18:44:16.688629  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key: {Name:mk7a994628320d94e2fc1127bf5e0ef4f0957cae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:16.688723  498519 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key
	I0914 18:44:16.966901  498519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt ...
	I0914 18:44:16.966933  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt: {Name:mkac923427e67b19db27155f023aa386c22f7d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:16.967121  498519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key ...
	I0914 18:44:16.967134  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key: {Name:mk5ba78b5f034ed70d3d14d4a6c1c0671354cad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:16.967247  498519 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.key
	I0914 18:44:16.967262  498519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt with IP's: []
	I0914 18:44:17.598136  498519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt ...
	I0914 18:44:17.598173  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: {Name:mkf984e1b4eb82f838d5bd1661fc48492dbac923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:17.598423  498519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.key ...
	I0914 18:44:17.598438  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.key: {Name:mk76dc74d2df981cd5dc982435a533465d0f1d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:17.598546  498519 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.key.dd3b5fb2
	I0914 18:44:17.598567  498519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 18:44:17.968053  498519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.crt.dd3b5fb2 ...
	I0914 18:44:17.968089  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.crt.dd3b5fb2: {Name:mk6d08311423c6fc8e0ec8bce05fc22e4b22c257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:17.968997  498519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.key.dd3b5fb2 ...
	I0914 18:44:17.969019  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.key.dd3b5fb2: {Name:mk9c6ceaad1a0ebf6a57e62be647aa189875a1ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:17.969170  498519 certs.go:337] copying /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.crt
	I0914 18:44:17.969250  498519 certs.go:341] copying /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.key
	I0914 18:44:17.969304  498519 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.key
	I0914 18:44:17.969324  498519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.crt with IP's: []
	I0914 18:44:18.431796  498519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.crt ...
	I0914 18:44:18.431829  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.crt: {Name:mk08363bb761a0636a4d41573ebffe802b733796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:18.432093  498519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.key ...
	I0914 18:44:18.432136  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.key: {Name:mk3d3aeecc0ca733f889476d3e8da55c306e6903 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:18.432891  498519 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:44:18.432942  498519 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:44:18.432971  498519 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:44:18.433001  498519 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem (1679 bytes)
	I0914 18:44:18.433657  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 18:44:18.463964  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:44:18.493328  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:44:18.521960  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:44:18.551976  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:44:18.580351  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 18:44:18.608929  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:44:18.636886  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:44:18.664320  498519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:44:18.692966  498519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:44:18.714008  498519 ssh_runner.go:195] Run: openssl version
	I0914 18:44:18.721092  498519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:44:18.732984  498519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:44:18.737647  498519 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:44:18.737715  498519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:44:18.746109  498519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:44:18.757863  498519 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 18:44:18.762118  498519 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 18:44:18.762177  498519 kubeadm.go:404] StartCluster: {Name:addons-531284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-531284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:44:18.762273  498519 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 18:44:18.762328  498519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:44:18.805817  498519 cri.go:89] found id: ""
	I0914 18:44:18.805900  498519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:44:18.816541  498519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:44:18.827385  498519 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 18:44:18.827496  498519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:44:18.838609  498519 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
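
The status-2 `ls` above is the expected outcome on a fresh node: none of the kubeconfig files exist yet, so stale-config cleanup is skipped and `kubeadm init` proceeds directly. A sketch of that existence probe in Go, with the file list taken from the log and the decision logic simplified:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		stale := false
		for _, f := range files {
			if _, err := os.Stat(f); err == nil {
				stale = true // a previous cluster left config behind
			}
		}
		if stale {
			fmt.Println("stale config found: clean up before kubeadm init")
		} else {
			fmt.Println("fresh node: skipping stale config cleanup")
		}
	}
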
	I0914 18:44:18.838653  498519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 18:44:18.946684  498519 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 18:44:19.033378  498519 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:44:33.793445  498519 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 18:44:33.793508  498519 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 18:44:33.793601  498519 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 18:44:33.793664  498519 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 18:44:33.793709  498519 kubeadm.go:322] OS: Linux
	I0914 18:44:33.793783  498519 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 18:44:33.793842  498519 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 18:44:33.793901  498519 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 18:44:33.793958  498519 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 18:44:33.794026  498519 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 18:44:33.794093  498519 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 18:44:33.794144  498519 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0914 18:44:33.794207  498519 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0914 18:44:33.794268  498519 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0914 18:44:33.794345  498519 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:44:33.794441  498519 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:44:33.794551  498519 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:44:33.794627  498519 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:44:33.797699  498519 out.go:204]   - Generating certificates and keys ...
	I0914 18:44:33.797932  498519 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 18:44:33.798044  498519 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 18:44:33.798193  498519 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 18:44:33.798341  498519 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 18:44:33.798462  498519 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 18:44:33.798584  498519 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 18:44:33.798671  498519 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 18:44:33.798849  498519 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-531284 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 18:44:33.799081  498519 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 18:44:33.799261  498519 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-531284 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 18:44:33.799384  498519 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 18:44:33.799515  498519 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 18:44:33.799580  498519 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 18:44:33.799649  498519 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:44:33.799708  498519 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:44:33.799778  498519 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:44:33.799859  498519 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:44:33.799915  498519 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:44:33.800012  498519 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:44:33.800103  498519 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:44:33.803331  498519 out.go:204]   - Booting up control plane ...
	I0914 18:44:33.803460  498519 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:44:33.803543  498519 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:44:33.803610  498519 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:44:33.803712  498519 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:44:33.803871  498519 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:44:33.803913  498519 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 18:44:33.804128  498519 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:44:33.804223  498519 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503042 seconds
	I0914 18:44:33.804327  498519 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:44:33.804453  498519 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:44:33.804511  498519 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:44:33.804752  498519 kubeadm.go:322] [mark-control-plane] Marking the node addons-531284 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:44:33.804945  498519 kubeadm.go:322] [bootstrap-token] Using token: 3fbylt.32ioygl18gkco7xw
	I0914 18:44:33.807385  498519 out.go:204]   - Configuring RBAC rules ...
	I0914 18:44:33.807652  498519 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:44:33.807833  498519 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:44:33.807987  498519 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:44:33.808176  498519 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:44:33.808315  498519 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:44:33.808425  498519 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:44:33.808552  498519 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:44:33.808619  498519 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 18:44:33.808667  498519 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 18:44:33.808672  498519 kubeadm.go:322] 
	I0914 18:44:33.808739  498519 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 18:44:33.808744  498519 kubeadm.go:322] 
	I0914 18:44:33.808822  498519 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 18:44:33.808827  498519 kubeadm.go:322] 
	I0914 18:44:33.808852  498519 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 18:44:33.808912  498519 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:44:33.808965  498519 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:44:33.808969  498519 kubeadm.go:322] 
	I0914 18:44:33.809028  498519 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 18:44:33.809032  498519 kubeadm.go:322] 
	I0914 18:44:33.809081  498519 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:44:33.809093  498519 kubeadm.go:322] 
	I0914 18:44:33.809147  498519 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 18:44:33.809242  498519 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:44:33.809311  498519 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:44:33.809316  498519 kubeadm.go:322] 
	I0914 18:44:33.809401  498519 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:44:33.809478  498519 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 18:44:33.809483  498519 kubeadm.go:322] 
	I0914 18:44:33.809568  498519 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3fbylt.32ioygl18gkco7xw \
	I0914 18:44:33.809672  498519 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9891dba8af05d8d789a2289ec0f3d6b8812b95541089682ca62328aa5c24a5b6 \
	I0914 18:44:33.809693  498519 kubeadm.go:322] 	--control-plane 
	I0914 18:44:33.809698  498519 kubeadm.go:322] 
	I0914 18:44:33.809784  498519 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:44:33.809788  498519 kubeadm.go:322] 
	I0914 18:44:33.809871  498519 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3fbylt.32ioygl18gkco7xw \
	I0914 18:44:33.809985  498519 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9891dba8af05d8d789a2289ec0f3d6b8812b95541089682ca62328aa5c24a5b6 
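
The `--discovery-token-ca-cert-hash` printed in the join commands above is, per the kubeadm convention, a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of recomputing it from ca.crt with the Go standard library; the cert path is the one this log installs, and the program itself is illustrative:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
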
	I0914 18:44:33.809993  498519 cni.go:84] Creating CNI manager for ""
	I0914 18:44:33.810001  498519 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:44:33.814358  498519 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 18:44:33.816848  498519 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 18:44:33.825102  498519 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 18:44:33.825126  498519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 18:44:33.853584  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 18:44:34.816972  498519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:44:34.817149  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:34.817254  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b minikube.k8s.io/name=addons-531284 minikube.k8s.io/updated_at=2023_09_14T18_44_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:34.837113  498519 ops.go:34] apiserver oom_adj: -16
	I0914 18:44:35.067128  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:35.162133  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:35.760491  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:36.260321  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:36.760760  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:37.260817  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:37.760469  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:38.260856  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:38.760863  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:39.260221  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:39.760199  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:40.260238  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:40.760202  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:41.260668  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:41.761011  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:42.260623  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:42.760751  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:43.260704  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:43.760149  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:44.260219  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:44.760870  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:45.260252  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:45.760638  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:46.260340  498519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:44:46.375936  498519 kubeadm.go:1081] duration metric: took 11.558873797s to wait for elevateKubeSystemPrivileges.
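
The burst of identical `kubectl get sa default` runs above is a poll loop: the command is retried roughly every 500ms until the default service account exists, which is what the 11.5s elevateKubeSystemPrivileges metric measures. A sketch of such a poll, with the kubectl path and interval inferred from the log's timestamps and a hypothetical five-minute budget:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			// The log shows retries at roughly 500ms spacing.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
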
	I0914 18:44:46.375961  498519 kubeadm.go:406] StartCluster complete in 27.613802824s
	I0914 18:44:46.375978  498519 settings.go:142] acquiring lock: {Name:mkfaf0f329c2736368d7fc21433e53e0c9a5b1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:46.376608  498519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:44:46.377016  498519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/kubeconfig: {Name:mk6a8e8b5c770de881617bb4e8ebf560fd4b6800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:44:46.377244  498519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 18:44:46.377585  498519 config.go:182] Loaded profile config "addons-531284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:44:46.377762  498519 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 18:44:46.377874  498519 addons.go:69] Setting volumesnapshots=true in profile "addons-531284"
	I0914 18:44:46.377902  498519 addons.go:231] Setting addon volumesnapshots=true in "addons-531284"
	I0914 18:44:46.377967  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.378556  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.379054  498519 addons.go:69] Setting cloud-spanner=true in profile "addons-531284"
	I0914 18:44:46.379072  498519 addons.go:231] Setting addon cloud-spanner=true in "addons-531284"
	I0914 18:44:46.379108  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.379506  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.379569  498519 addons.go:69] Setting inspektor-gadget=true in profile "addons-531284"
	I0914 18:44:46.379585  498519 addons.go:231] Setting addon inspektor-gadget=true in "addons-531284"
	I0914 18:44:46.379617  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.380016  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.380490  498519 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-531284"
	I0914 18:44:46.380539  498519 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-531284"
	I0914 18:44:46.380576  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.380996  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.383304  498519 addons.go:69] Setting default-storageclass=true in profile "addons-531284"
	I0914 18:44:46.383335  498519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-531284"
	I0914 18:44:46.383647  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.392467  498519 addons.go:69] Setting metrics-server=true in profile "addons-531284"
	I0914 18:44:46.394777  498519 addons.go:231] Setting addon metrics-server=true in "addons-531284"
	I0914 18:44:46.394888  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.395419  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.397843  498519 addons.go:69] Setting registry=true in profile "addons-531284"
	I0914 18:44:46.397927  498519 addons.go:231] Setting addon registry=true in "addons-531284"
	I0914 18:44:46.398006  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.398566  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.413832  498519 addons.go:69] Setting storage-provisioner=true in profile "addons-531284"
	I0914 18:44:46.394142  498519 addons.go:69] Setting gcp-auth=true in profile "addons-531284"
	I0914 18:44:46.394154  498519 addons.go:69] Setting ingress=true in profile "addons-531284"
	I0914 18:44:46.394160  498519 addons.go:69] Setting ingress-dns=true in profile "addons-531284"
	I0914 18:44:46.423246  498519 addons.go:231] Setting addon ingress-dns=true in "addons-531284"
	I0914 18:44:46.423325  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.434704  498519 addons.go:231] Setting addon storage-provisioner=true in "addons-531284"
	I0914 18:44:46.434818  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.435315  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.446582  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.471385  498519 mustload.go:65] Loading cluster: addons-531284
	I0914 18:44:46.471682  498519 config.go:182] Loaded profile config "addons-531284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:44:46.472002  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.494535  498519 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0914 18:44:46.489657  498519 addons.go:231] Setting addon ingress=true in "addons-531284"
	I0914 18:44:46.500002  498519 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 18:44:46.500072  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.506252  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.513068  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 18:44:46.520743  498519 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 18:44:46.520821  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 18:44:46.520929  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.515232  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 18:44:46.554625  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 18:44:46.515255  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 18:44:46.555974  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.567561  498519 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0914 18:44:46.559890  498519 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-531284" context rescaled to 1 replica
	I0914 18:44:46.575962  498519 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0914 18:44:46.576168  498519 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:44:46.584808  498519 out.go:177] * Verifying Kubernetes components...
	I0914 18:44:46.588718  498519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:44:46.584716  498519 out.go:177]   - Using image docker.io/registry:2.8.1
	I0914 18:44:46.584780  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 18:44:46.590036  498519 addons.go:231] Setting addon default-storageclass=true in "addons-531284"
	I0914 18:44:46.593405  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.593881  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:46.594088  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 18:44:46.596780  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 18:44:46.594450  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.627232  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 18:44:46.625103  498519 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0914 18:44:46.637006  498519 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0914 18:44:46.632744  498519 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 18:44:46.642384  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 18:44:46.645047  498519 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 18:44:46.645112  498519 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 18:44:46.659309  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0914 18:44:46.659394  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.659795  498519 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:44:46.660005  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:44:46.660113  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.674659  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 18:44:46.674840  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.678415  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 18:44:46.680323  498519 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 18:44:46.681885  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 18:44:46.681934  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 18:44:46.682031  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.699794  498519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:44:46.704118  498519 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:44:46.704140  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:44:46.704205  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.715474  498519 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 18:44:46.715657  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:46.719766  498519 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0914 18:44:46.721788  498519 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 18:44:46.726254  498519 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 18:44:46.726276  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0914 18:44:46.726343  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.810851  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.819053  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
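The docker container inspect Go template that keeps appearing above resolves which host port Docker mapped to the minikube container's 22/tcp; that is how every one of these ssh clients ends up at 127.0.0.1:33392. If you are reproducing this by hand, one roughly equivalent way to read the same mapping is:

	docker port addons-531284 22/tcp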
	I0914 18:44:46.852389  498519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 18:44:46.853716  498519 node_ready.go:35] waiting up to 6m0s for node "addons-531284" to be "Ready" ...
	I0914 18:44:46.861238  498519 node_ready.go:49] node "addons-531284" has status "Ready":"True"
	I0914 18:44:46.861313  498519 node_ready.go:38] duration metric: took 7.567106ms waiting for node "addons-531284" to be "Ready" ...
	I0914 18:44:46.861339  498519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:44:46.880100  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.895212  498519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace to be "Ready" ...
	I0914 18:44:46.905320  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.908141  498519 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:44:46.908162  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:44:46.908221  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:46.910420  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.926031  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.940731  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.965311  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.967290  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:46.968396  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:47.288751  498519 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 18:44:47.288822  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 18:44:47.473655  498519 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 18:44:47.473725  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 18:44:47.491936  498519 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 18:44:47.492006  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 18:44:47.515801  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 18:44:47.515825  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 18:44:47.585740  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 18:44:47.590842  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:44:47.620661  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 18:44:47.631156  498519 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 18:44:47.631177  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 18:44:47.636811  498519 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 18:44:47.636832  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 18:44:47.657302  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:44:47.695276  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 18:44:47.704449  498519 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:44:47.704519  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 18:44:47.723742  498519 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 18:44:47.723815  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 18:44:47.780533  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 18:44:47.780557  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 18:44:47.881455  498519 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 18:44:47.881478  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 18:44:47.886246  498519 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:44:47.886267  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:44:47.926775  498519 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 18:44:47.926802  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 18:44:47.929119  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 18:44:47.929141  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 18:44:47.978427  498519 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 18:44:47.978450  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 18:44:48.172484  498519 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 18:44:48.172553  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 18:44:48.188606  498519 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:44:48.188666  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:44:48.224123  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 18:44:48.224194  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 18:44:48.239844  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 18:44:48.254620  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 18:44:48.254692  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 18:44:48.407986  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:44:48.442522  498519 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 18:44:48.442591  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 18:44:48.464771  498519 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 18:44:48.464841  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 18:44:48.544673  498519 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 18:44:48.544735  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 18:44:48.756504  498519 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 18:44:48.756596  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0914 18:44:48.782285  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 18:44:48.938259  498519 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 18:44:48.938327  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 18:44:48.941056  498519 pod_ready.go:102] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"False"
	I0914 18:44:49.149193  498519 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.296765479s)
	I0914 18:44:49.149289  498519 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
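The two sed expressions in the command above splice a hosts block in front of CoreDNS's forward plugin and a log directive in front of errors, so after the replace the relevant part of the Corefile should look roughly like this (surrounding plugins elided; the full ConfigMap is not shown in this log):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

With that stanza in place, pods resolve host.minikube.internal to the Docker gateway, while every other name falls through to the node's resolver.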
	I0914 18:44:49.186675  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 18:44:49.257190  498519 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 18:44:49.257262  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 18:44:49.566809  498519 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 18:44:49.566891  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 18:44:49.715042  498519 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 18:44:49.715115  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 18:44:49.900104  498519 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 18:44:49.900130  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 18:44:50.210842  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 18:44:50.887018  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.301231152s)
	I0914 18:44:50.887095  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.296229418s)
	I0914 18:44:50.887140  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.266458868s)
	I0914 18:44:51.266930  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.609538377s)
	I0914 18:44:51.440230  498519 pod_ready.go:102] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"False"
	I0914 18:44:53.455531  498519 pod_ready.go:102] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"False"
	I0914 18:44:53.600288  498519 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 18:44:53.600434  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:53.631826  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:53.701868  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.006552782s)
	I0914 18:44:53.701907  498519 addons.go:467] Verifying addon ingress=true in "addons-531284"
	I0914 18:44:53.702085  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.462167605s)
	I0914 18:44:53.702158  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.294106028s)
	I0914 18:44:53.702233  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.919871483s)
	I0914 18:44:53.702280  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.515515018s)
	I0914 18:44:53.704073  498519 out.go:177] * Verifying ingress addon...
	I0914 18:44:53.709745  498519 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 18:44:53.709968  498519 addons.go:467] Verifying addon registry=true in "addons-531284"
	I0914 18:44:53.710231  498519 addons.go:467] Verifying addon metrics-server=true in "addons-531284"
	W0914 18:44:53.710266  498519 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 18:44:53.715136  498519 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 18:44:53.720516  498519 out.go:177] * Verifying registry addon...
	I0914 18:44:53.722859  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:53.723879  498519 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 18:44:53.724038  498519 retry.go:31] will retry after 227.086231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
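Both failures here are the same CRD-establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass and is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind when the custom resource arrives; hence "ensure CRDs are installed first". The retry (including the apply --force a few lines below) most likely succeeds simply because the CRDs become established in the meantime. The usual ordering fix, sketched with the CRD name from this run, is to wait for establishment between the two applies:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml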
	I0914 18:44:53.732014  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:53.732257  498519 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 18:44:53.732271  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:53.736261  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:53.912138  498519 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 18:44:53.951753  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 18:44:53.961829  498519 addons.go:231] Setting addon gcp-auth=true in "addons-531284"
	I0914 18:44:53.961930  498519 host.go:66] Checking if "addons-531284" exists ...
	I0914 18:44:53.962440  498519 cli_runner.go:164] Run: docker container inspect addons-531284 --format={{.State.Status}}
	I0914 18:44:53.984946  498519 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 18:44:53.985047  498519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-531284
	I0914 18:44:54.019613  498519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/addons-531284/id_rsa Username:docker}
	I0914 18:44:54.237201  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:54.242078  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:54.738587  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:54.743455  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:55.264045  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:55.265897  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:55.454172  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.243262964s)
	I0914 18:44:55.454215  498519 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-531284"
	I0914 18:44:55.457178  498519 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 18:44:55.460628  498519 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 18:44:55.468485  498519 pod_ready.go:102] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"False"
	I0914 18:44:55.473490  498519 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 18:44:55.473520  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:55.498503  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:55.736423  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:55.741006  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:55.856222  498519 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.871184631s)
	I0914 18:44:55.856318  498519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.904485691s)
	I0914 18:44:55.859074  498519 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 18:44:55.861051  498519 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 18:44:55.863093  498519 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 18:44:55.863121  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 18:44:55.899315  498519 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 18:44:55.899340  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 18:44:55.949105  498519 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 18:44:55.949135  498519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 18:44:55.986041  498519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 18:44:56.009567  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:56.238061  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:56.242797  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:56.505665  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:56.743890  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:56.750290  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:56.867519  498519 addons.go:467] Verifying addon gcp-auth=true in "addons-531284"
	I0914 18:44:56.870393  498519 out.go:177] * Verifying gcp-auth addon...
	I0914 18:44:56.875329  498519 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 18:44:56.884265  498519 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 18:44:56.884289  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:56.892736  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:57.006936  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:57.254412  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:57.255796  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:57.397270  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:57.504353  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:57.738042  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:57.741026  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:57.896896  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:57.939360  498519 pod_ready.go:102] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"False"
	I0914 18:44:58.007182  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:58.245439  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:58.256541  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:58.396836  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:58.505219  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:58.739008  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:58.742615  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:58.897483  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:59.006342  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:59.241080  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:59.243528  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:59.398339  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:44:59.504314  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:44:59.736925  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:44:59.742112  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:44:59.897303  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:00.006472  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:00.240791  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:00.249198  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:00.402520  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:00.441229  498519 pod_ready.go:102] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"False"
	I0914 18:45:00.506663  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:00.745275  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:00.746760  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:00.898845  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:01.007169  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:01.249852  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:01.251135  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:01.396527  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:01.438812  498519 pod_ready.go:92] pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace has status "Ready":"True"
	I0914 18:45:01.438842  498519 pod_ready.go:81] duration metric: took 14.543551465s waiting for pod "coredns-5dd5756b68-6z9qf" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.438855  498519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xzv7w" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.441270  498519 pod_ready.go:97] error getting pod "coredns-5dd5756b68-xzv7w" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-xzv7w" not found
	I0914 18:45:01.441297  498519 pod_ready.go:81] duration metric: took 2.434114ms waiting for pod "coredns-5dd5756b68-xzv7w" in "kube-system" namespace to be "Ready" ...
	E0914 18:45:01.441308  498519 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-xzv7w" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-xzv7w" not found
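coredns-5dd5756b68-xzv7w is the replica that was removed when the deployment was rescaled to 1 at the start of this section, so the "not found" above is expected and the extra wait correctly skips it; only coredns-5dd5756b68-6z9qf remains. One way to confirm, using the context name from this run:

	kubectl --context addons-531284 -n kube-system get pods -l k8s-app=kube-dns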
	I0914 18:45:01.441316  498519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.446980  498519 pod_ready.go:92] pod "etcd-addons-531284" in "kube-system" namespace has status "Ready":"True"
	I0914 18:45:01.447008  498519 pod_ready.go:81] duration metric: took 5.683893ms waiting for pod "etcd-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.447023  498519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.454068  498519 pod_ready.go:92] pod "kube-apiserver-addons-531284" in "kube-system" namespace has status "Ready":"True"
	I0914 18:45:01.454140  498519 pod_ready.go:81] duration metric: took 7.098961ms waiting for pod "kube-apiserver-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.454165  498519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.461134  498519 pod_ready.go:92] pod "kube-controller-manager-addons-531284" in "kube-system" namespace has status "Ready":"True"
	I0914 18:45:01.461219  498519 pod_ready.go:81] duration metric: took 7.031834ms waiting for pod "kube-controller-manager-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.461247  498519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55vpq" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.505305  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:01.636759  498519 pod_ready.go:92] pod "kube-proxy-55vpq" in "kube-system" namespace has status "Ready":"True"
	I0914 18:45:01.636786  498519 pod_ready.go:81] duration metric: took 175.519857ms waiting for pod "kube-proxy-55vpq" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.636798  498519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:01.739337  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:01.740626  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:01.896982  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:02.016062  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:02.036693  498519 pod_ready.go:92] pod "kube-scheduler-addons-531284" in "kube-system" namespace has status "Ready":"True"
	I0914 18:45:02.036719  498519 pod_ready.go:81] duration metric: took 399.911285ms waiting for pod "kube-scheduler-addons-531284" in "kube-system" namespace to be "Ready" ...
	I0914 18:45:02.036730  498519 pod_ready.go:38] duration metric: took 15.175367063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:45:02.036748  498519 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:45:02.036808  498519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:45:02.053487  498519 api_server.go:72] duration metric: took 15.472804316s to wait for apiserver process to appear ...
	I0914 18:45:02.053513  498519 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:45:02.053531  498519 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 18:45:02.062602  498519 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 18:45:02.064024  498519 api_server.go:141] control plane version: v1.28.1
	I0914 18:45:02.064049  498519 api_server.go:131] duration metric: took 10.529786ms to wait for apiserver health ...
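The healthz probe here is a plain HTTPS GET against the apiserver. The equivalent manual check from the host is sketched below; -k (--insecure) is needed because the apiserver's serving certificate is issued by the cluster's own CA, which curl will not trust by default:

	curl -k https://192.168.49.2:8443/healthz
	ok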
	I0914 18:45:02.064058  498519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:45:02.247663  498519 system_pods.go:59] 17 kube-system pods found
	I0914 18:45:02.247705  498519 system_pods.go:61] "coredns-5dd5756b68-6z9qf" [a55a0d6a-c5ca-42db-a514-fa07b193e25c] Running
	I0914 18:45:02.247715  498519 system_pods.go:61] "csi-hostpath-attacher-0" [11176d22-dad9-4291-9d3e-a87c099f1408] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 18:45:02.247724  498519 system_pods.go:61] "csi-hostpath-resizer-0" [69159c91-b831-4642-a0be-93e75350b8e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 18:45:02.247732  498519 system_pods.go:61] "csi-hostpathplugin-6dnfc" [0ac05f4f-613f-4af5-bcc8-cbfab980c48d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 18:45:02.247744  498519 system_pods.go:61] "etcd-addons-531284" [c9e3e4b7-8d82-400a-886f-3fdc736a2c9f] Running
	I0914 18:45:02.247750  498519 system_pods.go:61] "kindnet-2vmqd" [dfce2081-340d-457b-aa80-03d005d5dded] Running
	I0914 18:45:02.247755  498519 system_pods.go:61] "kube-apiserver-addons-531284" [c1de290f-6154-4810-ac1e-e3cb320e647f] Running
	I0914 18:45:02.247760  498519 system_pods.go:61] "kube-controller-manager-addons-531284" [2640a2ea-4148-4402-8a12-3ee45ebc851d] Running
	I0914 18:45:02.247769  498519 system_pods.go:61] "kube-ingress-dns-minikube" [7bbfa2a2-10aa-4f0f-8009-e0cd04f51f6c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0914 18:45:02.247774  498519 system_pods.go:61] "kube-proxy-55vpq" [d5d878c2-3c34-4b3e-b429-f44620c955ab] Running
	I0914 18:45:02.247780  498519 system_pods.go:61] "kube-scheduler-addons-531284" [3263487b-16e0-4924-a423-44b127e6716a] Running
	I0914 18:45:02.247788  498519 system_pods.go:61] "metrics-server-7c66d45ddc-9zr98" [7582499e-b5c4-46d2-b45e-b3d4b0c91902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:45:02.247794  498519 system_pods.go:61] "registry-proxy-4vs8q" [7cadbedb-cc49-4ca9-9df2-fd1e7019f21e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 18:45:02.247800  498519 system_pods.go:61] "registry-shj4x" [0364cf43-d68c-44d4-8ef3-8a69c44bd62b] Running
	I0914 18:45:02.247807  498519 system_pods.go:61] "snapshot-controller-58dbcc7b99-6vxvm" [ac0cb08a-7c93-4348-a21f-58936db83cad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 18:45:02.247815  498519 system_pods.go:61] "snapshot-controller-58dbcc7b99-nbx66" [5db7f7d2-d2e7-4f50-9141-9b5845f5bf67] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 18:45:02.247820  498519 system_pods.go:61] "storage-provisioner" [ce2c29da-f82d-4860-aa17-2b091defa8fd] Running
	I0914 18:45:02.247826  498519 system_pods.go:74] duration metric: took 183.762065ms to wait for pod list to return data ...
	I0914 18:45:02.247837  498519 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:45:02.249187  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:02.249304  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:02.397213  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:02.436244  498519 default_sa.go:45] found service account: "default"
	I0914 18:45:02.436312  498519 default_sa.go:55] duration metric: took 188.467412ms for default service account to be created ...
	I0914 18:45:02.436337  498519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:45:02.505111  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:02.644213  498519 system_pods.go:86] 17 kube-system pods found
	I0914 18:45:02.644304  498519 system_pods.go:89] "coredns-5dd5756b68-6z9qf" [a55a0d6a-c5ca-42db-a514-fa07b193e25c] Running
	I0914 18:45:02.644334  498519 system_pods.go:89] "csi-hostpath-attacher-0" [11176d22-dad9-4291-9d3e-a87c099f1408] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 18:45:02.644370  498519 system_pods.go:89] "csi-hostpath-resizer-0" [69159c91-b831-4642-a0be-93e75350b8e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 18:45:02.644397  498519 system_pods.go:89] "csi-hostpathplugin-6dnfc" [0ac05f4f-613f-4af5-bcc8-cbfab980c48d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 18:45:02.644417  498519 system_pods.go:89] "etcd-addons-531284" [c9e3e4b7-8d82-400a-886f-3fdc736a2c9f] Running
	I0914 18:45:02.644440  498519 system_pods.go:89] "kindnet-2vmqd" [dfce2081-340d-457b-aa80-03d005d5dded] Running
	I0914 18:45:02.644477  498519 system_pods.go:89] "kube-apiserver-addons-531284" [c1de290f-6154-4810-ac1e-e3cb320e647f] Running
	I0914 18:45:02.644502  498519 system_pods.go:89] "kube-controller-manager-addons-531284" [2640a2ea-4148-4402-8a12-3ee45ebc851d] Running
	I0914 18:45:02.644525  498519 system_pods.go:89] "kube-ingress-dns-minikube" [7bbfa2a2-10aa-4f0f-8009-e0cd04f51f6c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0914 18:45:02.644545  498519 system_pods.go:89] "kube-proxy-55vpq" [d5d878c2-3c34-4b3e-b429-f44620c955ab] Running
	I0914 18:45:02.644575  498519 system_pods.go:89] "kube-scheduler-addons-531284" [3263487b-16e0-4924-a423-44b127e6716a] Running
	I0914 18:45:02.644627  498519 system_pods.go:89] "metrics-server-7c66d45ddc-9zr98" [7582499e-b5c4-46d2-b45e-b3d4b0c91902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:45:02.644650  498519 system_pods.go:89] "registry-proxy-4vs8q" [7cadbedb-cc49-4ca9-9df2-fd1e7019f21e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 18:45:02.644667  498519 system_pods.go:89] "registry-shj4x" [0364cf43-d68c-44d4-8ef3-8a69c44bd62b] Running
	I0914 18:45:02.644700  498519 system_pods.go:89] "snapshot-controller-58dbcc7b99-6vxvm" [ac0cb08a-7c93-4348-a21f-58936db83cad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 18:45:02.644727  498519 system_pods.go:89] "snapshot-controller-58dbcc7b99-nbx66" [5db7f7d2-d2e7-4f50-9141-9b5845f5bf67] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 18:45:02.644746  498519 system_pods.go:89] "storage-provisioner" [ce2c29da-f82d-4860-aa17-2b091defa8fd] Running
	I0914 18:45:02.644768  498519 system_pods.go:126] duration metric: took 208.41422ms to wait for k8s-apps to be running ...
	I0914 18:45:02.644798  498519 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:45:02.644871  498519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:45:02.661404  498519 system_svc.go:56] duration metric: took 16.607469ms WaitForService to wait for kubelet.
	I0914 18:45:02.661479  498519 kubeadm.go:581] duration metric: took 16.080802418s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 18:45:02.661537  498519 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:45:02.738366  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:02.743202  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:02.836865  498519 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 18:45:02.836943  498519 node_conditions.go:123] node cpu capacity is 2
	I0914 18:45:02.836969  498519 node_conditions.go:105] duration metric: took 175.415003ms to run NodePressure ...
	I0914 18:45:02.836995  498519 start.go:228] waiting for startup goroutines ...
	I0914 18:45:02.897321  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:03.008888  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:03.237887  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:03.243100  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:03.397669  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:03.505702  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:03.739242  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:03.756199  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:03.898523  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:04.011586  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:04.239039  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:04.242773  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:04.397256  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:04.504278  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:04.736481  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:04.741967  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:04.896787  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:05.020410  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:05.238096  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:05.243103  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:05.397211  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:05.506334  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:05.738626  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:05.741284  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:05.897977  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:06.005865  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:06.236354  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:06.240855  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:06.396727  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:06.508377  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:06.736947  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:06.742054  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:06.897128  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:07.004826  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:07.238119  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:07.244568  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:07.396852  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:07.505157  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:07.736784  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:07.741597  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:07.896886  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:08.011487  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:08.236868  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:08.243420  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 18:45:08.397188  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:08.504936  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:08.736892  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:08.741352  498519 kapi.go:107] duration metric: took 15.017467754s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 18:45:08.896745  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:09.006187  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:09.237593  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:09.396320  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:09.504804  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:09.737702  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:09.897243  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:10.016408  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:10.236723  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:10.396311  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:10.505951  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:10.738355  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:10.898475  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:11.014245  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:11.237423  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:11.409158  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:11.504948  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:11.737809  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:11.896561  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:12.018196  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:12.243295  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:12.396283  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:12.505866  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:12.736886  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:12.896479  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:13.012413  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:13.239357  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:13.399491  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:13.505288  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:13.737239  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:13.897179  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:14.008972  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:14.238509  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:14.398048  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:14.505072  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:14.740712  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:14.897413  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:15.007041  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:15.237237  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:15.397249  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:15.507252  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:15.737327  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:15.897539  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:16.005856  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:16.237410  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:16.396436  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:16.504337  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:16.737228  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:16.897205  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:17.005812  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:17.236903  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:17.396996  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:17.505679  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:17.737516  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:17.897357  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:18.010048  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:18.236557  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:18.396194  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:18.506390  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:18.737370  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:18.897201  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:19.008137  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:19.236824  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:19.396818  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:19.505253  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:19.738849  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:19.897238  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:20.016058  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:20.236940  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:20.399059  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:20.505463  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:20.736771  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:20.905920  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:21.007406  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:21.236916  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:21.396714  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:21.512480  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:21.736486  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:21.897366  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:22.007523  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:22.239797  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:22.397088  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:22.505180  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:22.737012  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:22.896778  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:23.012135  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:23.236903  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:23.397034  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:23.512906  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:23.736677  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:23.897440  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:24.034664  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:24.237640  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:24.397544  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:24.506340  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:24.736971  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:24.896699  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:25.006915  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:25.237559  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:25.396458  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:25.506041  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:25.736922  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:25.896971  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:26.012725  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:26.236846  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:26.396787  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:26.504576  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:26.739262  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:26.897447  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:27.006430  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:27.240712  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:27.401948  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:27.505017  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:27.736842  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:27.897343  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:28.022543  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:28.237222  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:28.397202  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:28.504230  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:28.736844  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:28.896988  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:29.006491  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:29.237625  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:29.396832  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:29.512194  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:29.738072  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:29.896824  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:30.006635  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:30.237559  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:30.396330  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:30.505090  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:30.736907  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:30.896835  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:31.005202  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:31.236542  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:31.397440  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:31.504751  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:31.737065  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:31.896917  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:32.006521  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:32.237365  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:32.397207  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:32.506585  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:32.737185  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:32.900373  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:33.015350  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:33.237233  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:33.397403  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:33.504490  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:33.737259  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:33.897126  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:34.008004  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:34.237195  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:34.397496  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:34.504811  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:34.737566  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:34.897144  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:35.006085  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:35.237960  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:35.397350  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:35.505407  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:35.737084  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:35.896844  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:36.009029  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:36.251265  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:36.400713  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:36.506093  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:36.736474  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:36.897265  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:37.006110  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:37.236979  498519 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 18:45:37.396659  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:37.504968  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:37.738041  498519 kapi.go:107] duration metric: took 44.028296233s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 18:45:37.900110  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:38.011315  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:38.396735  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:38.505859  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:38.897573  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:39.007452  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:39.396688  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:39.505280  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:39.897285  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:40.021983  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:40.397447  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:40.505213  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:40.897755  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:41.006327  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:41.397637  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:41.504664  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:41.897401  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:42.016213  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:42.396716  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:42.504383  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:42.897099  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:43.015547  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:43.398353  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:43.504715  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:43.897176  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:44.006092  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:44.397514  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:44.505153  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:44.896844  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:45.014271  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:45.396378  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:45.506195  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:45.897571  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:46.005415  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:46.396782  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:46.504643  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 18:45:46.896673  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:47.006985  498519 kapi.go:107] duration metric: took 51.546348124s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 18:45:47.396328  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:47.896540  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:48.396880  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:48.897251  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:49.397129  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:49.897624  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:50.396979  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:50.896452  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:51.396998  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:51.897561  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:52.396990  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:52.897497  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:53.396302  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:53.896936  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:54.396468  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:54.897178  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:55.397537  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:55.897048  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:56.397369  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:56.896477  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:57.396693  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:57.897465  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:58.396475  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:58.896516  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:59.396368  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:45:59.896725  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:00.396714  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:00.896900  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:01.396983  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:01.896965  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:02.397526  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:02.897396  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:03.398509  498519 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 18:46:03.896668  498519 kapi.go:107] duration metric: took 1m7.021340587s to wait for kubernetes.io/minikube-addons=gcp-auth ...
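	(Editor's note: the kapi.go:96/kapi.go:107 lines above come from a label-selector polling loop — each addon's pods are listed by label on a short interval until they report Running, at which point the "duration metric" line is emitted. Below is a minimal client-go sketch of that pattern; the package name, helper name, 500ms interval, and 18m timeout are illustrative assumptions, not minikube's actual implementation.)

	// Sketch only: poll pods matching a label selector until all are Running.
	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPods(client kubernetes.Interface, ns, selector string) error {
		return wait.PollImmediate(500*time.Millisecond, 18*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			if len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != v1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}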
	I0914 18:46:03.899044  498519 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-531284 cluster.
	I0914 18:46:03.901224  498519 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 18:46:03.903370  498519 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 18:46:03.905650  498519 out.go:177] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0914 18:46:03.907589  498519 addons.go:502] enable addons completed in 1m17.529809579s: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner inspektor-gadget metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0914 18:46:03.907634  498519 start.go:233] waiting for cluster config update ...
	I0914 18:46:03.907652  498519 start.go:242] writing updated cluster config ...
	I0914 18:46:03.907964  498519 ssh_runner.go:195] Run: rm -f paused
	I0914 18:46:04.099086  498519 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 18:46:04.101463  498519 out.go:177] * Done! kubectl is now configured to use "addons-531284" cluster and "default" namespace by default
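	(Editor's note: the gcp-auth message above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod built with client-go types follows; only the label key comes from the log — the pod name, image, and label value "true" are illustrative assumptions.)

	package example

	import (
		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// optOutPod returns a pod labeled so the gcp-auth webhook would skip
	// mounting credentials into it (label key per the log above).
	func optOutPod() *v1.Pod {
		return &v1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: v1.PodSpec{
				Containers: []v1.Container{{
					Name:  "app",
					Image: "nginx",
				}},
			},
		}
	}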
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7ae1bc67ccdfb       a39a074194753       8 seconds ago        Exited              hello-world-app           2                   e177e07833e38       hello-world-app-5d77478584-8479n
	20b5491cdd342       fa0c6bb795403       34 seconds ago       Running             nginx                     0                   8fd970af2de3b       nginx
	599d029ec9dc1       71e15c1ff4390       About a minute ago   Running             headlamp                  0                   c8eefa787c8e4       headlamp-699c48fb74-dw57t
	b16e4b2fcaa7e       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   f7e53cacc8c21       gcp-auth-d4c87556c-rz8g8
	c23794a5ceaee       8f2588812ab29       About a minute ago   Exited              patch                     0                   952cb0f74578f       ingress-nginx-admission-patch-cgn92
	b1f96a22daf1d       8f2588812ab29       About a minute ago   Exited              create                    0                   355348e9dca11       ingress-nginx-admission-create-46mcl
	4a3f3a86c3cc0       97e04611ad434       2 minutes ago        Running             coredns                   0                   935e255b5f8f9       coredns-5dd5756b68-6z9qf
	a58efa493d2e4       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   3f94f16eec9c4       storage-provisioner
	04ddea0e0e160       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni               0                   e320b0199d966       kindnet-2vmqd
	c5025df7d8310       812f5241df7fd       2 minutes ago        Running             kube-proxy                0                   2997915f1126a       kube-proxy-55vpq
	df257effa992a       b29fb62480892       2 minutes ago        Running             kube-apiserver            0                   46bf1bde3cad5       kube-apiserver-addons-531284
	e99958e287ee1       b4a5a57e99492       2 minutes ago        Running             kube-scheduler            0                   f4eb26afe7713       kube-scheduler-addons-531284
	929f21054362a       8b6e1980b7584       2 minutes ago        Running             kube-controller-manager   0                   ebb12341d4188       kube-controller-manager-addons-531284
	2314f2898ab5c       9cdd6470f48c8       2 minutes ago        Running             etcd                      0                   a388bf30df084       etcd-addons-531284
	
	* 
	* ==> containerd <==
	* Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.880162309Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:47:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9395 runtime=io.containerd.runc.v2\n"
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.884025997Z" level=info msg="StopContainer for \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\" returns successfully"
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.885005798Z" level=info msg="StopPodSandbox for \"110b21b1bf42e8e9d98db7f60c46cb2e88aded51918ae6fbe1451930e41604cd\""
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.885171337Z" level=info msg="Container to stop \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.885620503Z" level=info msg="StopContainer for \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\" returns successfully"
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.888159322Z" level=info msg="StopPodSandbox for \"5faa8341baa83dab414ea762c7015dd282efbe7ae8d79582e669559ac14686d4\""
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.888315753Z" level=info msg="Container to stop \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.982234614Z" level=info msg="shim disconnected" id=110b21b1bf42e8e9d98db7f60c46cb2e88aded51918ae6fbe1451930e41604cd
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.982444682Z" level=warning msg="cleaning up after shim disconnected" id=110b21b1bf42e8e9d98db7f60c46cb2e88aded51918ae6fbe1451930e41604cd namespace=k8s.io
	Sep 14 18:47:14 addons-531284 containerd[746]: time="2023-09-14T18:47:14.982525199Z" level=info msg="cleaning up dead shim"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.033819051Z" level=info msg="shim disconnected" id=5faa8341baa83dab414ea762c7015dd282efbe7ae8d79582e669559ac14686d4
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.034660103Z" level=warning msg="cleaning up after shim disconnected" id=5faa8341baa83dab414ea762c7015dd282efbe7ae8d79582e669559ac14686d4 namespace=k8s.io
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.034803258Z" level=info msg="cleaning up dead shim"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.042793068Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:47:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9459 runtime=io.containerd.runc.v2\n"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.074439961Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:47:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9473 runtime=io.containerd.runc.v2\n"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.088816492Z" level=info msg="TearDown network for sandbox \"110b21b1bf42e8e9d98db7f60c46cb2e88aded51918ae6fbe1451930e41604cd\" successfully"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.089032214Z" level=info msg="StopPodSandbox for \"110b21b1bf42e8e9d98db7f60c46cb2e88aded51918ae6fbe1451930e41604cd\" returns successfully"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.115674592Z" level=info msg="TearDown network for sandbox \"5faa8341baa83dab414ea762c7015dd282efbe7ae8d79582e669559ac14686d4\" successfully"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.115889706Z" level=info msg="StopPodSandbox for \"5faa8341baa83dab414ea762c7015dd282efbe7ae8d79582e669559ac14686d4\" returns successfully"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.239118946Z" level=info msg="RemoveContainer for \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\""
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.248780989Z" level=info msg="RemoveContainer for \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\" returns successfully"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.252691414Z" level=error msg="ContainerStatus for \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\": not found"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.254765207Z" level=info msg="RemoveContainer for \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\""
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.262568506Z" level=info msg="RemoveContainer for \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\" returns successfully"
	Sep 14 18:47:15 addons-531284 containerd[746]: time="2023-09-14T18:47:15.266115746Z" level=error msg="ContainerStatus for \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\": not found"
	
	* 
	* ==> coredns [4a3f3a86c3cc000b28f87979d7d52b27bd4b08830f5490029efdf9f8730005c7] <==
	* [INFO] 10.244.0.16:60856 - 42160 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040928s
	[INFO] 10.244.0.16:60856 - 51636 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001961211s
	[INFO] 10.244.0.16:36231 - 38349 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001600618s
	[INFO] 10.244.0.16:36231 - 40792 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001780484s
	[INFO] 10.244.0.16:60856 - 63437 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001489332s
	[INFO] 10.244.0.16:60856 - 42176 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000121428s
	[INFO] 10.244.0.16:36231 - 25859 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088566s
	[INFO] 10.244.0.16:58592 - 59012 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00012375s
	[INFO] 10.244.0.16:43106 - 18412 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000255123s
	[INFO] 10.244.0.16:58592 - 308 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101588s
	[INFO] 10.244.0.16:43106 - 30747 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000142638s
	[INFO] 10.244.0.16:58592 - 56779 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000214827s
	[INFO] 10.244.0.16:43106 - 2492 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080247s
	[INFO] 10.244.0.16:58592 - 30241 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086047s
	[INFO] 10.244.0.16:43106 - 36126 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082166s
	[INFO] 10.244.0.16:58592 - 48336 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008992s
	[INFO] 10.244.0.16:43106 - 31958 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043749s
	[INFO] 10.244.0.16:43106 - 42648 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092964s
	[INFO] 10.244.0.16:58592 - 51150 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000210003s
	[INFO] 10.244.0.16:43106 - 6218 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001273923s
	[INFO] 10.244.0.16:58592 - 46970 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00138274s
	[INFO] 10.244.0.16:58592 - 61154 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001052859s
	[INFO] 10.244.0.16:43106 - 37011 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001152445s
	[INFO] 10.244.0.16:58592 - 12589 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071795s
	[INFO] 10.244.0.16:43106 - 12545 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00014487s
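	(Editor's note: the NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion — the pod's resolver tries each search suffix (default.svc.cluster.local, svc.cluster.local, cluster.local, then the node's us-east-2.compute.internal domain) before the fully qualified service name answers NOERROR. A minimal sketch reproducing the final successful lookup, illustrative only — it is expected to fail when run outside the cluster.)

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Matches the NOERROR lines above: the fully qualified
		// <service>.<namespace>.svc.cluster.local name resolves cleanly.
		addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed (expected outside the cluster):", err)
			return
		}
		fmt.Println("resolved to:", addrs)
	}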
	
	* 
	* ==> describe nodes <==
	* Name:               addons-531284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-531284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b
	                    minikube.k8s.io/name=addons-531284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T18_44_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-531284
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 18:44:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-531284
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 18:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 18:47:06 +0000   Thu, 14 Sep 2023 18:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 18:47:06 +0000   Thu, 14 Sep 2023 18:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 18:47:06 +0000   Thu, 14 Sep 2023 18:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 18:47:06 +0000   Thu, 14 Sep 2023 18:44:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-531284
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 02923931eae24cfcad27c290cc12b0bc
	  System UUID:                a46ed6c7-50a0-4d72-8ba6-1e030fdb9660
	  Boot ID:                    5482c722-bf9c-42ea-8010-6373e20f2ddd
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.22
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-8479n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-d4c87556c-rz8g8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  headlamp                    headlamp-699c48fb74-dw57t                0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 coredns-5dd5756b68-6z9qf                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-addons-531284                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m42s
	  kube-system                 kindnet-2vmqd                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m31s
	  kube-system                 kube-apiserver-addons-531284             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-addons-531284    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-55vpq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-addons-531284             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m51s (x8 over 2m51s)  kubelet          Node addons-531284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x8 over 2m51s)  kubelet          Node addons-531284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x7 over 2m51s)  kubelet          Node addons-531284 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m43s                  kubelet          Node addons-531284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s                  kubelet          Node addons-531284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s                  kubelet          Node addons-531284 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m43s                  kubelet          Node addons-531284 status is now: NodeNotReady
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m42s                  kubelet          Node addons-531284 status is now: NodeReady
	  Normal  RegisteredNode           2m31s                  node-controller  Node addons-531284 event: Registered Node addons-531284 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000738] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001019] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000e6d70ae1
	[  +0.001093] FS-Cache: N-key=[8] '943a5c0100000000'
	[  +0.020369] FS-Cache: Duplicate cookie detected
	[  +0.000880] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000007cc3b60a
	[  +0.001158] FS-Cache: O-key=[8] '943a5c0100000000'
	[  +0.000773] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001149] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=000000009e94c0ae
	[  +0.001230] FS-Cache: N-key=[8] '943a5c0100000000'
	[  +2.856088] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000000e997ae2
	[  +0.001081] FS-Cache: O-key=[8] '933a5c0100000000'
	[  +0.000711] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000e6d70ae1
	[  +0.001069] FS-Cache: N-key=[8] '933a5c0100000000'
	[  +0.399166] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=0000000095ffa149
	[  +0.001108] FS-Cache: O-key=[8] '993a5c0100000000'
	[  +0.000739] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=0000000080474072
	[  +0.001122] FS-Cache: N-key=[8] '993a5c0100000000'
	[ +10.571489] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [2314f2898ab5cd1717e24974b9c0a89377aada9e59a361dabd44dc6183395c0e] <==
	* {"level":"info","ts":"2023-09-14T18:44:26.375621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-09-14T18:44:26.375701Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-09-14T18:44:26.37599Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T18:44:26.376252Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T18:44:26.376276Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T18:44:26.376305Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-09-14T18:44:26.376316Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-09-14T18:44:27.252628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T18:44:27.252734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T18:44:27.25277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-09-14T18:44:27.252825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T18:44:27.252863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-09-14T18:44:27.252975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-09-14T18:44:27.253062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-09-14T18:44:27.25676Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-531284 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T18:44:27.256869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T18:44:27.258166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-09-14T18:44:27.256933Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:44:27.262591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T18:44:27.262695Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T18:44:27.256957Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T18:44:27.263362Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:44:27.26356Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:44:27.263665Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:44:27.269428Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> gcp-auth [b16e4b2fcaa7e28bba1bb2926c0016c8e3587c9227f5dc9cc32f02e800badd27] <==
	* 2023/09/14 18:46:03 GCP Auth Webhook started!
	2023/09/14 18:46:11 Ready to marshal response ...
	2023/09/14 18:46:11 Ready to write response ...
	2023/09/14 18:46:11 Ready to marshal response ...
	2023/09/14 18:46:11 Ready to write response ...
	2023/09/14 18:46:11 Ready to marshal response ...
	2023/09/14 18:46:11 Ready to write response ...
	2023/09/14 18:46:14 Ready to marshal response ...
	2023/09/14 18:46:14 Ready to write response ...
	2023/09/14 18:46:25 Ready to marshal response ...
	2023/09/14 18:46:25 Ready to write response ...
	2023/09/14 18:46:39 Ready to marshal response ...
	2023/09/14 18:46:39 Ready to write response ...
	2023/09/14 18:46:49 Ready to marshal response ...
	2023/09/14 18:46:49 Ready to write response ...
	2023/09/14 18:46:57 Ready to marshal response ...
	2023/09/14 18:46:57 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:47:16 up  4:29,  0 users,  load average: 2.61, 1.35, 1.11
	Linux addons-531284 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [04ddea0e0e1603140a79f197b296542c1ce633c2ebae27fb9865edda0a35d1da] <==
	* I0914 18:45:09.752542       1 main.go:227] handling current node
	I0914 18:45:19.768394       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:45:19.768419       1 main.go:227] handling current node
	I0914 18:45:29.780795       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:45:29.781204       1 main.go:227] handling current node
	I0914 18:45:39.795879       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:45:39.795908       1 main.go:227] handling current node
	I0914 18:45:49.799991       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:45:49.800019       1 main.go:227] handling current node
	I0914 18:45:59.812357       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:45:59.812383       1 main.go:227] handling current node
	I0914 18:46:09.829810       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:46:09.829839       1 main.go:227] handling current node
	I0914 18:46:19.840562       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:46:19.840616       1 main.go:227] handling current node
	I0914 18:46:29.844311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:46:29.844350       1 main.go:227] handling current node
	I0914 18:46:39.858111       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:46:39.858146       1 main.go:227] handling current node
	I0914 18:46:49.928845       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:46:49.928876       1 main.go:227] handling current node
	I0914 18:46:59.940341       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:46:59.940374       1 main.go:227] handling current node
	I0914 18:47:09.945312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:47:09.945340       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [df257effa992a7af8697659ab8ac16dab647e684a7832df0717f75a6639a87dc] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 18:47:13.282403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:47:14.428519       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.428635       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.443197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.443260       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.464532       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.465558       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.478055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.478100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.509859       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.509916       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.516432       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.517254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.547142       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.552266       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 18:47:14.571663       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 18:47:14.574239       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0914 18:47:14.598855       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 18:47:14.598954       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 18:47:14.603820       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 18:47:14.604760       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0914 18:47:15.478270       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 18:47:15.572368       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 18:47:15.587686       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [929f21054362abc700294f9b638366371e60b3c37a2b8d04e186f5eef7baa6e8] <==
	* I0914 18:46:51.088668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.792979ms"
	I0914 18:46:51.089567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.769µs"
	I0914 18:46:52.084899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.539161ms"
	I0914 18:46:52.085716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.862µs"
	I0914 18:46:53.081388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="76.521µs"
	W0914 18:46:53.198820       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 18:46:53.198874       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0914 18:46:54.083236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.35µs"
	I0914 18:46:55.829733       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0914 18:47:07.171180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="7.705µs"
	I0914 18:47:07.171541       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0914 18:47:07.195159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.015µs"
	I0914 18:47:07.205945       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0914 18:47:07.772278       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0914 18:47:07.862284       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0914 18:47:11.391731       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 18:47:11.391765       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0914 18:47:14.642041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="4.85µs"
	I0914 18:47:15.456997       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0914 18:47:15.457037       1 shared_informer.go:318] Caches are synced for resource quota
	E0914 18:47:15.480922       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 18:47:15.575014       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 18:47:15.590842       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0914 18:47:15.931505       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0914 18:47:15.931550       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [c5025df7d8310a881042d3f5b1aeb05b2b8804dbec813c03b3fab36460896e35] <==
	* I0914 18:44:47.348244       1 server_others.go:69] "Using iptables proxy"
	I0914 18:44:47.369783       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0914 18:44:47.421089       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 18:44:47.423638       1 server_others.go:152] "Using iptables Proxier"
	I0914 18:44:47.423707       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 18:44:47.423720       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 18:44:47.423791       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 18:44:47.430788       1 server.go:846] "Version info" version="v1.28.1"
	I0914 18:44:47.430813       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:44:47.447391       1 config.go:188] "Starting service config controller"
	I0914 18:44:47.447439       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 18:44:47.447464       1 config.go:97] "Starting endpoint slice config controller"
	I0914 18:44:47.447468       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 18:44:47.447896       1 config.go:315] "Starting node config controller"
	I0914 18:44:47.447905       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 18:44:47.548411       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 18:44:47.548481       1 shared_informer.go:318] Caches are synced for service config
	I0914 18:44:47.548375       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e99958e287ee1d02e39aba89753602428e0bc2769086cd181867169b058a4104] <==
	* W0914 18:44:30.423163       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:44:30.423428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 18:44:30.423120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:44:30.423551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 18:44:30.423093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 18:44:30.423570       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0914 18:44:31.264125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:44:31.264167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 18:44:31.410874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:44:31.411156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 18:44:31.442205       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:44:31.442460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 18:44:31.487102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:44:31.487150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 18:44:31.573343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:44:31.573691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 18:44:31.573645       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:44:31.573937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 18:44:31.575547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:44:31.575580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 18:44:31.626180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:44:31.626397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 18:44:31.704237       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:44:31.704458       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0914 18:44:33.407945       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 14 18:47:10 addons-531284 kubelet[1353]: I0914 18:47:10.607089    1353 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156b344d-a01e-4e88-bce7-a00dcb5b12aa-kube-api-access-tcsvs" (OuterVolumeSpecName: "kube-api-access-tcsvs") pod "156b344d-a01e-4e88-bce7-a00dcb5b12aa" (UID: "156b344d-a01e-4e88-bce7-a00dcb5b12aa"). InnerVolumeSpecName "kube-api-access-tcsvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 18:47:10 addons-531284 kubelet[1353]: I0914 18:47:10.607105    1353 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156b344d-a01e-4e88-bce7-a00dcb5b12aa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "156b344d-a01e-4e88-bce7-a00dcb5b12aa" (UID: "156b344d-a01e-4e88-bce7-a00dcb5b12aa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 18:47:10 addons-531284 kubelet[1353]: I0914 18:47:10.703386    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tcsvs\" (UniqueName: \"kubernetes.io/projected/156b344d-a01e-4e88-bce7-a00dcb5b12aa-kube-api-access-tcsvs\") on node \"addons-531284\" DevicePath \"\""
	Sep 14 18:47:10 addons-531284 kubelet[1353]: I0914 18:47:10.703433    1353 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/156b344d-a01e-4e88-bce7-a00dcb5b12aa-webhook-cert\") on node \"addons-531284\" DevicePath \"\""
	Sep 14 18:47:11 addons-531284 kubelet[1353]: I0914 18:47:11.216246    1353 scope.go:117] "RemoveContainer" containerID="259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f"
	Sep 14 18:47:11 addons-531284 kubelet[1353]: I0914 18:47:11.223468    1353 scope.go:117] "RemoveContainer" containerID="259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f"
	Sep 14 18:47:11 addons-531284 kubelet[1353]: E0914 18:47:11.224009    1353 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f\": not found" containerID="259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f"
	Sep 14 18:47:11 addons-531284 kubelet[1353]: I0914 18:47:11.224059    1353 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f"} err="failed to get container status \"259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f\": rpc error: code = NotFound desc = an error occurred when try to find container \"259aef53aae0f32cd96f89445d0adf42815afc01eb4f5c8be6b3beb78839d92f\": not found"
	Sep 14 18:47:11 addons-531284 kubelet[1353]: I0914 18:47:11.781251    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="156b344d-a01e-4e88-bce7-a00dcb5b12aa" path="/var/lib/kubelet/pods/156b344d-a01e-4e88-bce7-a00dcb5b12aa/volumes"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.231556    1353 scope.go:117] "RemoveContainer" containerID="c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.246600    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmzdd\" (UniqueName: \"kubernetes.io/projected/5db7f7d2-d2e7-4f50-9141-9b5845f5bf67-kube-api-access-rmzdd\") pod \"5db7f7d2-d2e7-4f50-9141-9b5845f5bf67\" (UID: \"5db7f7d2-d2e7-4f50-9141-9b5845f5bf67\") "
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.246667    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ql85\" (UniqueName: \"kubernetes.io/projected/ac0cb08a-7c93-4348-a21f-58936db83cad-kube-api-access-8ql85\") pod \"ac0cb08a-7c93-4348-a21f-58936db83cad\" (UID: \"ac0cb08a-7c93-4348-a21f-58936db83cad\") "
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.252219    1353 scope.go:117] "RemoveContainer" containerID="c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: E0914 18:47:15.252926    1353 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\": not found" containerID="c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.252964    1353 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2"} err="failed to get container status \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5315539b0bf0ed901072e80f68393c447b6fcc90ef221d643468f6399ac7ab2\": not found"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.252977    1353 scope.go:117] "RemoveContainer" containerID="10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.261745    1353 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0cb08a-7c93-4348-a21f-58936db83cad-kube-api-access-8ql85" (OuterVolumeSpecName: "kube-api-access-8ql85") pod "ac0cb08a-7c93-4348-a21f-58936db83cad" (UID: "ac0cb08a-7c93-4348-a21f-58936db83cad"). InnerVolumeSpecName "kube-api-access-8ql85". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.264380    1353 scope.go:117] "RemoveContainer" containerID="10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: E0914 18:47:15.266429    1353 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\": not found" containerID="10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.266507    1353 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992"} err="failed to get container status \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\": rpc error: code = NotFound desc = an error occurred when try to find container \"10294ca0f89e79c5963b86fc958ce6f173319d2715b3314fcb1b55b3e897b992\": not found"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.280905    1353 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db7f7d2-d2e7-4f50-9141-9b5845f5bf67-kube-api-access-rmzdd" (OuterVolumeSpecName: "kube-api-access-rmzdd") pod "5db7f7d2-d2e7-4f50-9141-9b5845f5bf67" (UID: "5db7f7d2-d2e7-4f50-9141-9b5845f5bf67"). InnerVolumeSpecName "kube-api-access-rmzdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.347809    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rmzdd\" (UniqueName: \"kubernetes.io/projected/5db7f7d2-d2e7-4f50-9141-9b5845f5bf67-kube-api-access-rmzdd\") on node \"addons-531284\" DevicePath \"\""
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.347851    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8ql85\" (UniqueName: \"kubernetes.io/projected/ac0cb08a-7c93-4348-a21f-58936db83cad-kube-api-access-8ql85\") on node \"addons-531284\" DevicePath \"\""
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.783584    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5db7f7d2-d2e7-4f50-9141-9b5845f5bf67" path="/var/lib/kubelet/pods/5db7f7d2-d2e7-4f50-9141-9b5845f5bf67/volumes"
	Sep 14 18:47:15 addons-531284 kubelet[1353]: I0914 18:47:15.784048    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ac0cb08a-7c93-4348-a21f-58936db83cad" path="/var/lib/kubelet/pods/ac0cb08a-7c93-4348-a21f-58936db83cad/volumes"
	
	* 
	* ==> storage-provisioner [a58efa493d2e4d571277a4ef2e2a4dd0c473d26dd30f6976783bb4d600d0a58d] <==
	* I0914 18:44:52.589342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:44:52.603086       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:44:52.603191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:44:52.614573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:44:52.616613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-531284_30a1c06b-1c06-45db-b6b4-d1870bcc53c1!
	I0914 18:44:52.616816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15517ecb-044a-4899-a7f9-4dfdb797cb31", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-531284_30a1c06b-1c06-45db-b6b4-d1870bcc53c1 became leader
	I0914 18:44:52.717493       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-531284_30a1c06b-1c06-45db-b6b4-d1870bcc53c1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-531284 -n addons-531284
helpers_test.go:261: (dbg) Run:  kubectl --context addons-531284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.35s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (18.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-759345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-759345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (15.700437342s)

                                                
                                                
-- stdout --
	* [functional-759345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node functional-759345 in cluster functional-759345
	* Pulling base image ...
	* Updating the running docker "functional-759345" container ...
	* Preparing Kubernetes v1.28.1 on containerd 1.6.22 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:50:59.261065  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.261493  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.261763  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.262006  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.262257  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-th28x" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.262440  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.275028  521996 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	E0914 18:50:59.405503  521996 start.go:882] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-759345": Get "https://192.168.49.2:8441/api/v1/nodes/functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-759345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 15.700629359s for "functional-759345" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-759345
helpers_test.go:235: (dbg) docker inspect functional-759345:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74",
	        "Created": "2023-09-14T18:49:34.627644488Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T18:49:34.989493968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d5e38ecae883e5d7fbaaccc26de9209a95c7f11864ba7a4058d1702f044efe72",
	        "ResolvConfPath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/hostname",
	        "HostsPath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/hosts",
	        "LogPath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74-json.log",
	        "Name": "/functional-759345",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-759345:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-759345",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d-init/diff:/var/lib/docker/overlay2/b22941fdffad93645039179e8c1eee3cd74765d1689d400cab1ec16e85e4dbbf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-759345",
	                "Source": "/var/lib/docker/volumes/functional-759345/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-759345",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-759345",
	                "name.minikube.sigs.k8s.io": "functional-759345",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb843a5bf256ed5327fb8ca773c65b1271c15b140f69312a55e743614b517470",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb843a5bf256",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-759345": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1541df32776c",
	                        "functional-759345"
	                    ],
	                    "NetworkID": "8a85198388218c32bae5cb9e94a3a74f580a87b5edd3d73974881f8a2d9b5947",
	                    "EndpointID": "25026d10ce7681ad07866e695cadbdf17e6b398e57c55ab974612b1312c81796",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-759345 -n functional-759345
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-759345 -n functional-759345: exit status 2 (340.406868ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 logs -n 25: (1.71113393s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-700163                                                         | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	| start   | -p functional-759345                                                     | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:50 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-759345                                                     | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | minikube-local-cache-test:functional-759345                              |                   |         |         |                     |                     |
	| cache   | functional-759345 cache delete                                           | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | minikube-local-cache-test:functional-759345                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	| ssh     | functional-759345 ssh sudo                                               | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-759345                                                        | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-759345 ssh                                                    | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-759345 cache reload                                           | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	| ssh     | functional-759345 ssh                                                    | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-759345 kubectl --                                             | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | --context functional-759345                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-759345                                                     | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
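	The last entry above is the invocation under test; a minimal reconstruction of that command from the flags recorded in the table (the component prefix before the first dot in --extra-config selects the target component, and the remainder is passed through as an apiserver flag):
	
	  out/minikube-linux-arm64 start -p functional-759345 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	    --wait=all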
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:50:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:50:43.773466  521996 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:50:43.773676  521996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:50:43.773680  521996 out.go:309] Setting ErrFile to fd 2...
	I0914 18:50:43.773685  521996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:50:43.773935  521996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:50:43.774338  521996 out.go:303] Setting JSON to false
	I0914 18:50:43.775389  521996 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16387,"bootTime":1694701057,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:50:43.775449  521996 start.go:138] virtualization:  
	I0914 18:50:43.777961  521996 out.go:177] * [functional-759345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:50:43.780289  521996 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:50:43.780495  521996 notify.go:220] Checking for updates...
	I0914 18:50:43.784406  521996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:50:43.786363  521996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:50:43.788361  521996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:50:43.789983  521996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:50:43.791686  521996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:50:43.794060  521996 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:50:43.794161  521996 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:50:43.820493  521996 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:50:43.820616  521996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:50:43.899581  521996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-14 18:50:43.890269867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:50:43.899669  521996 docker.go:294] overlay module found
	I0914 18:50:43.901685  521996 out.go:177] * Using the docker driver based on existing profile
	I0914 18:50:43.903711  521996 start.go:298] selected driver: docker
	I0914 18:50:43.903718  521996 start.go:902] validating driver "docker" against &{Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:50:43.903821  521996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:50:43.903914  521996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:50:43.975798  521996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:48 SystemTime:2023-09-14 18:50:43.966696431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:50:43.976196  521996 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:50:43.976250  521996 cni.go:84] Creating CNI manager for ""
	I0914 18:50:43.976256  521996 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:50:43.976266  521996 start_flags.go:321] config:
	{Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:50:43.980370  521996 out.go:177] * Starting control plane node functional-759345 in cluster functional-759345
	I0914 18:50:43.982631  521996 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0914 18:50:43.984683  521996 out.go:177] * Pulling base image ...
	I0914 18:50:43.986964  521996 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:50:43.987015  521996 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4
	I0914 18:50:43.987022  521996 cache.go:57] Caching tarball of preloaded images
	I0914 18:50:43.987044  521996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0914 18:50:43.987106  521996 preload.go:174] Found /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 18:50:43.987115  521996 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on containerd
	I0914 18:50:43.987223  521996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/config.json ...
	I0914 18:50:44.010103  521996 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0914 18:50:44.010122  521996 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0914 18:50:44.010146  521996 cache.go:195] Successfully downloaded all kic artifacts
	I0914 18:50:44.010180  521996 start.go:365] acquiring machines lock for functional-759345: {Name:mka6c7880e02c7b8fafdad11b137b4a7f14a8d64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:50:44.010270  521996 start.go:369] acquired machines lock for "functional-759345" in 62.277µs
	I0914 18:50:44.010292  521996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:50:44.010297  521996 fix.go:54] fixHost starting: 
	I0914 18:50:44.010598  521996 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	I0914 18:50:44.029939  521996 fix.go:102] recreateIfNeeded on functional-759345: state=Running err=<nil>
	W0914 18:50:44.029979  521996 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 18:50:44.032633  521996 out.go:177] * Updating the running docker "functional-759345" container ...
	I0914 18:50:44.034590  521996 machine.go:88] provisioning docker machine ...
	I0914 18:50:44.034634  521996 ubuntu.go:169] provisioning hostname "functional-759345"
	I0914 18:50:44.034706  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:44.053383  521996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:50:44.053799  521996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0914 18:50:44.053809  521996 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-759345 && echo "functional-759345" | sudo tee /etc/hostname
	I0914 18:50:44.208913  521996 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-759345
	
	I0914 18:50:44.208982  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:44.232022  521996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:50:44.232427  521996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0914 18:50:44.232443  521996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-759345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-759345/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-759345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:50:44.370611  521996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:50:44.370627  521996 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17217-492678/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-492678/.minikube}
	I0914 18:50:44.370655  521996 ubuntu.go:177] setting up certificates
	I0914 18:50:44.370663  521996 provision.go:83] configureAuth start
	I0914 18:50:44.370727  521996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-759345
	I0914 18:50:44.389048  521996 provision.go:138] copyHostCerts
	I0914 18:50:44.389106  521996 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem, removing ...
	I0914 18:50:44.389114  521996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem
	I0914 18:50:44.389191  521996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem (1082 bytes)
	I0914 18:50:44.389293  521996 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem, removing ...
	I0914 18:50:44.389297  521996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem
	I0914 18:50:44.389324  521996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem (1123 bytes)
	I0914 18:50:44.389379  521996 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem, removing ...
	I0914 18:50:44.389382  521996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem
	I0914 18:50:44.389406  521996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem (1679 bytes)
	I0914 18:50:44.389448  521996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem org=jenkins.functional-759345 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-759345]
	I0914 18:50:45.480410  521996 provision.go:172] copyRemoteCerts
	I0914 18:50:45.480465  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:50:45.480504  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:45.500868  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:45.599668  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:50:45.631197  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 18:50:45.660537  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:50:45.689217  521996 provision.go:86] duration metric: configureAuth took 1.318536338s
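	The server cert generated at provision.go:112 above carries the SAN set [192.168.49.2 127.0.0.1 localhost minikube functional-759345]; a roughly equivalent self-signed certificate can be sketched with OpenSSL 1.1.1+ (illustration only, not the code path minikube uses):
	
	  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	    -keyout server-key.pem -out server.pem \
	    -subj "/O=jenkins.functional-759345" \
	    -addext "subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:functional-759345"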
	I0914 18:50:45.689241  521996 ubuntu.go:193] setting minikube options for container-runtime
	I0914 18:50:45.689471  521996 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:50:45.689477  521996 machine.go:91] provisioned docker machine in 1.654879849s
	I0914 18:50:45.689483  521996 start.go:300] post-start starting for "functional-759345" (driver="docker")
	I0914 18:50:45.689493  521996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:50:45.689545  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:50:45.689580  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:45.709661  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:45.814891  521996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:50:45.820551  521996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 18:50:45.820605  521996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 18:50:45.820615  521996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 18:50:45.820622  521996 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 18:50:45.820631  521996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/addons for local assets ...
	I0914 18:50:45.820696  521996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/files for local assets ...
	I0914 18:50:45.820770  521996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem -> 4980292.pem in /etc/ssl/certs
	I0914 18:50:45.820852  521996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/test/nested/copy/498029/hosts -> hosts in /etc/test/nested/copy/498029
	I0914 18:50:45.820899  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/498029
	I0914 18:50:45.832370  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem --> /etc/ssl/certs/4980292.pem (1708 bytes)
	I0914 18:50:45.862206  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/test/nested/copy/498029/hosts --> /etc/test/nested/copy/498029/hosts (40 bytes)
	I0914 18:50:45.892276  521996 start.go:303] post-start completed in 202.777219ms
	I0914 18:50:45.892346  521996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:50:45.892388  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:45.910384  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:46.010408  521996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 18:50:46.017355  521996 fix.go:56] fixHost completed within 2.007047862s
	I0914 18:50:46.017369  521996 start.go:83] releasing machines lock for "functional-759345", held for 2.007091357s
	I0914 18:50:46.017446  521996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-759345
	I0914 18:50:46.035560  521996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:50:46.035603  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:46.035846  521996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:50:46.035889  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:46.058079  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:46.060861  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:46.286469  521996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:50:46.292491  521996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 18:50:46.298508  521996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 18:50:46.321686  521996 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 18:50:46.321757  521996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:50:46.332988  521996 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
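	The loopback patch above adds a "name" field and pins cniVersion, so a patched loopback config should look like this (file name assumed for illustration):
	
	  /etc/cni/net.d/200-loopback.conf:
	  {
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	  }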
	I0914 18:50:46.333001  521996 start.go:469] detecting cgroup driver to use...
	I0914 18:50:46.333032  521996 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 18:50:46.333088  521996 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 18:50:46.348433  521996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 18:50:46.362645  521996 docker.go:196] disabling cri-docker service (if available) ...
	I0914 18:50:46.362706  521996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:50:46.379268  521996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:50:46.393127  521996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:50:46.519813  521996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:50:46.648439  521996 docker.go:212] disabling docker service ...
	I0914 18:50:46.648497  521996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:50:46.667094  521996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:50:46.682207  521996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:50:46.810221  521996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:50:46.934642  521996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:50:46.949724  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:50:46.970367  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 18:50:46.983129  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 18:50:46.995930  521996 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 18:50:46.996007  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 18:50:47.009506  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:50:47.021923  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 18:50:47.034337  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:50:47.048136  521996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:50:47.060457  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 18:50:47.073505  521996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:50:47.084473  521996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:50:47.095380  521996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:50:47.220202  521996 ssh_runner.go:195] Run: sudo systemctl restart containerd
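	Taken together, the sed edits above leave /etc/containerd/config.toml with these values (a sketch showing only the keys touched; section paths follow the containerd 1.6 defaults):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    sandbox_image = "registry.k8s.io/pause:3.9"
	    restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false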
	I0914 18:50:47.435451  521996 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 18:50:47.435513  521996 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 18:50:47.440635  521996 start.go:537] Will wait 60s for crictl version
	I0914 18:50:47.440690  521996 ssh_runner.go:195] Run: which crictl
	I0914 18:50:47.445110  521996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:50:47.500373  521996 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.22
	RuntimeApiVersion:  v1
	I0914 18:50:47.500429  521996 ssh_runner.go:195] Run: containerd --version
	I0914 18:50:47.533973  521996 ssh_runner.go:195] Run: containerd --version
	I0914 18:50:47.574376  521996 out.go:177] * Preparing Kubernetes v1.28.1 on containerd 1.6.22 ...
	I0914 18:50:47.576188  521996 cli_runner.go:164] Run: docker network inspect functional-759345 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:50:47.593551  521996 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 18:50:47.600385  521996 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0914 18:50:47.602346  521996 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:50:47.602429  521996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:50:47.663782  521996 containerd.go:604] all images are preloaded for containerd runtime.
	I0914 18:50:47.663793  521996 containerd.go:518] Images already preloaded, skipping extraction
	I0914 18:50:47.663849  521996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:50:47.709563  521996 containerd.go:604] all images are preloaded for containerd runtime.
	I0914 18:50:47.709574  521996 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:50:47.709647  521996 ssh_runner.go:195] Run: sudo crictl info
	I0914 18:50:47.753147  521996 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0914 18:50:47.753169  521996 cni.go:84] Creating CNI manager for ""
	I0914 18:50:47.753175  521996 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:50:47.753183  521996 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 18:50:47.753201  521996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-759345 NodeName:functional-759345 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:50:47.753359  521996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-759345"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
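	Once the apiserver restarts with this config, the admission-plugin override can be checked against the static pod spec (a sketch; kubeadm names the pod kube-apiserver-<node name>):
	
	  kubectl -n kube-system get pod kube-apiserver-functional-759345 -o yaml \
	    | grep enable-admission-plugins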
	
	I0914 18:50:47.753423  521996 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-759345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0914 18:50:47.753503  521996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 18:50:47.765115  521996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:50:47.765181  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:50:47.777187  521996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0914 18:50:47.798901  521996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:50:47.820529  521996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0914 18:50:47.842636  521996 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 18:50:47.847320  521996 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345 for IP: 192.168.49.2
	I0914 18:50:47.847342  521996 certs.go:190] acquiring lock for shared ca certs: {Name:mka5985e85be7ad08b440e022e8dd6d327029a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:50:47.847469  521996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key
	I0914 18:50:47.847504  521996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key
	I0914 18:50:47.847575  521996 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.key
	I0914 18:50:47.847619  521996 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/apiserver.key.dd3b5fb2
	I0914 18:50:47.847655  521996 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/proxy-client.key
	I0914 18:50:47.847778  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem (1338 bytes)
	W0914 18:50:47.847805  521996 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029_empty.pem, impossibly tiny 0 bytes
	I0914 18:50:47.847814  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:50:47.847837  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:50:47.847860  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:50:47.847885  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem (1679 bytes)
	I0914 18:50:47.847941  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem (1708 bytes)
	I0914 18:50:47.848715  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 18:50:47.878526  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:50:47.908268  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:50:47.938593  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:50:47.967532  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:50:47.999408  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 18:50:48.048868  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:50:48.081205  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:50:48.113734  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:50:48.144456  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem --> /usr/share/ca-certificates/498029.pem (1338 bytes)
	I0914 18:50:48.175909  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem --> /usr/share/ca-certificates/4980292.pem (1708 bytes)
	I0914 18:50:48.205388  521996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:50:48.229353  521996 ssh_runner.go:195] Run: openssl version
	I0914 18:50:48.236433  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:50:48.248642  521996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:50:48.253350  521996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:50:48.253407  521996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:50:48.262282  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:50:48.273480  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/498029.pem && ln -fs /usr/share/ca-certificates/498029.pem /etc/ssl/certs/498029.pem"
	I0914 18:50:48.285008  521996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/498029.pem
	I0914 18:50:48.289762  521996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:49 /usr/share/ca-certificates/498029.pem
	I0914 18:50:48.289819  521996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/498029.pem
	I0914 18:50:48.298440  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/498029.pem /etc/ssl/certs/51391683.0"
	I0914 18:50:48.309112  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4980292.pem && ln -fs /usr/share/ca-certificates/4980292.pem /etc/ssl/certs/4980292.pem"
	I0914 18:50:48.321054  521996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4980292.pem
	I0914 18:50:48.326078  521996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:49 /usr/share/ca-certificates/4980292.pem
	I0914 18:50:48.326132  521996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4980292.pem
	I0914 18:50:48.334609  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4980292.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:50:48.345669  521996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 18:50:48.350088  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:50:48.358802  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:50:48.367530  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:50:48.375990  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:50:48.384504  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:50:48.393181  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
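	Each -checkend 86400 call above exits 0 only if the certificate is still valid 24 hours from now; for example:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"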
	I0914 18:50:48.402938  521996 kubeadm.go:404] StartCluster: {Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:50:48.403026  521996 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 18:50:48.403088  521996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:50:48.454355  521996 cri.go:89] found id: "0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098"
	I0914 18:50:48.454368  521996 cri.go:89] found id: "9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f"
	I0914 18:50:48.454372  521996 cri.go:89] found id: "e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343"
	I0914 18:50:48.454375  521996 cri.go:89] found id: "b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81"
	I0914 18:50:48.454378  521996 cri.go:89] found id: "a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81"
	I0914 18:50:48.454382  521996 cri.go:89] found id: "8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da"
	I0914 18:50:48.454386  521996 cri.go:89] found id: "1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3"
	I0914 18:50:48.454389  521996 cri.go:89] found id: "2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64"
	I0914 18:50:48.454392  521996 cri.go:89] found id: "9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624"
	I0914 18:50:48.454398  521996 cri.go:89] found id: ""
	I0914 18:50:48.454453  521996 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0914 18:50:48.490161  521996 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","pid":2124,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0/rootfs","created":"2023-09-14T18:50:25.406413135Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-8gmx4_54060bf5-109d-46ae-9109-334e69e27e07","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-8gmx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"54060bf5-109d-46ae-9109-334e69e27e07"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098","pid":2949,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098/rootfs","created":"2023-09-14T18:50:42.555545836Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4525561a-da21-495e-b7d3-5515c83d50df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","pid":1674,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81/rootfs","created":"2023-09-14T18:50:11.761642689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4525561a-da21-495e-b7d3-5515c83d50df","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4525561a-da21-495e-b7d3-5515c83d50df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3","pid":1320,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3/rootfs","created":"2023-09-14T18:49:50.232472523Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri.sandbox-id":"28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"36bdd136296b0d2b4232a27e95688fee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64","pid":1309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64/rootfs","created":"2023-09-14T18:49:50.238080182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri.sandbox-id":"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b06718b4c7fa973ebc40bb50dcf6660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","pid":1201,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e/rootfs","created":"2023-09-14T18:49:50.034952233Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-759345_36bdd136296b0d2b4232a27e95688fee","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"36bdd136296b0d2b4232a27e95688fee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da","pid":1328,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da/rootfs","created":"2023-09-14T18:49:50.252838235Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri.sandbox-id":"e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2a868197f3e71fd109fc9a68b9758d0c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624","pid":1257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624/rootfs","created":"2023-09-14T18:49:50.134582341Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","io.kubernetes.cri.sandbox-name":"etcd-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ced9b208564f27cb5f2c00ad557393d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","pid":1161,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d/rootfs","created":"2023-09-14T18:49:49.970070596Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-759345_ced9b208564f27cb5f2c00ad557393d5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ced9b208564f27cb5f2c00ad557393d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f","pid":2151,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f/rootfs","created":"2023-09-14T18:50:25.502775749Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-8gmx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"54060bf5-109d-46ae-9109-334e69e27e07"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","pid":1807,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851/rootfs","created":"2023-09-14T18:50:11.983815624Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-th28x_33dc7ee0-d321-46ce-aa60-311175ef90f3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-th28x","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"33dc7ee0-d321-46ce-aa60-311175ef90f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81","pid":1853,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81/rootfs","created":"2023-09-14T18:50:12.131418599Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri.sandbox-id":"aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","io.kubernetes.cri.sandbox-name":"kube-proxy-th28x","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"33dc7ee0-d321-46ce-aa60-311175ef90f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","pid":1175,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb/rootfs","created":"2023-09-14T18:49:49.99123745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-759345_1b06718b4c7fa973ebc40bb50dcf6660","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b06718b4c7fa973ebc40bb50dcf6660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a/rootfs","created":"2023-09-14T18:49:49.993455769Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-759345_2a868197f3e71fd109fc9a68b9758d0c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2a868197f3e71fd109fc9a68b9758d0c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343","pid":2023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343/rootfs","created":"2023-09-14T18:50:13.326378076Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","io.kubernetes.cri.sandbox-name":"kindnet-lrpkn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c3084e0a-78b3-4888-bb8f-f70cc32083a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","pid":1808,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026/rootfs","created":"2023-09-14T18:50:12.026330101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-lrpkn_c3084e0a-78b3-4888-bb8f-f70cc32083a7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-lrpkn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c3084e0a-78b3-4888-bb8f-f70cc32083a7"},"owner":"root"}]
	I0914 18:50:48.490448  521996 cri.go:126] list returned 16 containers
	I0914 18:50:48.490454  521996 cri.go:129] container: {ID:0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0 Status:running}
	I0914 18:50:48.490468  521996 cri.go:131] skipping 0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0 - not in ps
	I0914 18:50:48.490472  521996 cri.go:129] container: {ID:0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 Status:running}
	I0914 18:50:48.490478  521996 cri.go:135] skipping {0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 running}: state = "running", want "paused"
	I0914 18:50:48.490487  521996 cri.go:129] container: {ID:189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81 Status:running}
	I0914 18:50:48.490493  521996 cri.go:131] skipping 189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81 - not in ps
	I0914 18:50:48.490497  521996 cri.go:129] container: {ID:1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 Status:running}
	I0914 18:50:48.490503  521996 cri.go:135] skipping {1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 running}: state = "running", want "paused"
	I0914 18:50:48.490508  521996 cri.go:129] container: {ID:2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 Status:running}
	I0914 18:50:48.490514  521996 cri.go:135] skipping {2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 running}: state = "running", want "paused"
	I0914 18:50:48.490520  521996 cri.go:129] container: {ID:28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e Status:running}
	I0914 18:50:48.490526  521996 cri.go:131] skipping 28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e - not in ps
	I0914 18:50:48.490530  521996 cri.go:129] container: {ID:8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da Status:running}
	I0914 18:50:48.490535  521996 cri.go:135] skipping {8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da running}: state = "running", want "paused"
	I0914 18:50:48.490540  521996 cri.go:129] container: {ID:9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624 Status:running}
	I0914 18:50:48.490546  521996 cri.go:135] skipping {9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624 running}: state = "running", want "paused"
	I0914 18:50:48.490551  521996 cri.go:129] container: {ID:96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d Status:running}
	I0914 18:50:48.490559  521996 cri.go:131] skipping 96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d - not in ps
	I0914 18:50:48.490563  521996 cri.go:129] container: {ID:9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f Status:running}
	I0914 18:50:48.490569  521996 cri.go:135] skipping {9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f running}: state = "running", want "paused"
	I0914 18:50:48.490574  521996 cri.go:129] container: {ID:aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851 Status:running}
	I0914 18:50:48.490579  521996 cri.go:131] skipping aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851 - not in ps
	I0914 18:50:48.490583  521996 cri.go:129] container: {ID:b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 Status:running}
	I0914 18:50:48.490589  521996 cri.go:135] skipping {b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 running}: state = "running", want "paused"
	I0914 18:50:48.490593  521996 cri.go:129] container: {ID:d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb Status:running}
	I0914 18:50:48.490599  521996 cri.go:131] skipping d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb - not in ps
	I0914 18:50:48.490603  521996 cri.go:129] container: {ID:e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a Status:running}
	I0914 18:50:48.490609  521996 cri.go:131] skipping e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a - not in ps
	I0914 18:50:48.490613  521996 cri.go:129] container: {ID:e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 Status:running}
	I0914 18:50:48.490619  521996 cri.go:135] skipping {e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 running}: state = "running", want "paused"
	I0914 18:50:48.490623  521996 cri.go:129] container: {ID:f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026 Status:running}
	I0914 18:50:48.490629  521996 cri.go:131] skipping f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026 - not in ps
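
The cri.go lines above show the selection rule: `runc list` reports both sandboxes and containers, sandboxes are dropped because crictl never listed them ("not in ps"), and the rest are dropped because their state is "running" while the caller asked for "paused" ones. A sketch of that filter, with the struct trimmed to the fields used here (names are illustrative, not minikube's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer trims `runc list -f json` output to the fields used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// selectIDs keeps containers that crictl also reported (inPs) and whose
// runc status matches the wanted state.
func selectIDs(listJSON []byte, inPs map[string]bool, want string) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(listJSON, &all); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range all {
		if !inPs[c.ID] {
			continue // "skipping <id> - not in ps" (sandboxes)
		}
		if c.Status != want {
			continue // `state = "running", want "paused"`
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, err := selectIDs(sample, map[string]bool{"abc": true, "def": true}, "paused")
	fmt.Println(ids, err) // [def] <nil>
}
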
	I0914 18:50:48.490683  521996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:50:48.501723  521996 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 18:50:48.501734  521996 kubeadm.go:636] restartCluster start
	I0914 18:50:48.501788  521996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:50:48.512345  521996 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:50:48.512925  521996 kubeconfig.go:92] found "functional-759345" server: "https://192.168.49.2:8441"
	I0914 18:50:48.514671  521996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:50:48.525799  521996 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-09-14 18:49:41.991318920 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-09-14 18:50:47.835456425 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
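
The unified diff above is how the restart path decides reconfiguration is needed: `diff -u` exits 0 when the rendered kubeadm.yaml matches the one on disk and 1 when they differ (here the enable-admission-plugins extra arg changed). A sketch of that probe, assuming diff's conventional exit codes:

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer assumes diff's conventional exit codes: 0 identical,
// 1 different, 2 trouble (e.g. a missing file).
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if differ {
		fmt.Println("needs reconfigure: configs differ:")
		fmt.Println(diff)
	}
}
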
	I0914 18:50:48.525808  521996 kubeadm.go:1128] stopping kube-system containers ...
	I0914 18:50:48.525818  521996 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0914 18:50:48.525874  521996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:50:48.568435  521996 cri.go:89] found id: "0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098"
	I0914 18:50:48.568447  521996 cri.go:89] found id: "9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f"
	I0914 18:50:48.568461  521996 cri.go:89] found id: "e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343"
	I0914 18:50:48.568465  521996 cri.go:89] found id: "b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81"
	I0914 18:50:48.568468  521996 cri.go:89] found id: "a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81"
	I0914 18:50:48.568473  521996 cri.go:89] found id: "8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da"
	I0914 18:50:48.568476  521996 cri.go:89] found id: "1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3"
	I0914 18:50:48.568489  521996 cri.go:89] found id: "2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64"
	I0914 18:50:48.568494  521996 cri.go:89] found id: "9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624"
	I0914 18:50:48.568500  521996 cri.go:89] found id: ""
	I0914 18:50:48.568504  521996 cri.go:234] Stopping containers: [0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624]
	I0914 18:50:48.568572  521996 ssh_runner.go:195] Run: which crictl
	I0914 18:50:48.573210  521996 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624
	I0914 18:50:53.846962  521996 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624: (5.273720689s)
	W0914 18:50:53.847015  521996 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624: Process exited with status 1
	stdout:
	0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098
	9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f
	e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343
	b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81
	
	stderr:
	E0914 18:50:53.843934    3441 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81\": not found" containerID="a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81"
	time="2023-09-14T18:50:53Z" level=fatal msg="stopping the container \"a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81\": not found"
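
The batch stop above exited with status 1 only because one ID (a7d0af04...) had already been removed between listing and stopping; the log treats this as a warning and continues. A sketch that instead stops containers one at a time and tolerates the NotFound case (illustrative only; minikube's actual handling is the warn-and-continue seen above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopEach stops containers one at a time so a single stale ID cannot
// fail the whole batch; NotFound is treated as already stopped.
func stopEach(ids []string) error {
	for _, id := range ids {
		out, err := exec.Command("sudo", "crictl", "stop", "--timeout=10", id).CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "NotFound") {
				fmt.Printf("container %s already gone, skipping\n", id)
				continue
			}
			return fmt.Errorf("stop %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical ID for illustration only.
	if err := stopEach([]string{"deadbeef"}); err != nil {
		fmt.Println(err)
	}
}
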
	I0914 18:50:53.847076  521996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:50:53.917191  521996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:50:53.928254  521996 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 14 18:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 14 18:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 14 18:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 14 18:49 /etc/kubernetes/scheduler.conf
	
	I0914 18:50:53.928314  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0914 18:50:53.939924  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0914 18:50:53.951439  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0914 18:50:53.962872  521996 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:50:53.962943  521996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:50:53.973931  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0914 18:50:53.984761  521996 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:50:53.984818  521996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
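
The grep probes above check whether each existing kubeconfig-style file still points at https://control-plane.minikube.internal:8441; grep exits 1 when the pattern is absent, so controller-manager.conf and scheduler.conf are removed and regenerated by the next kubeadm phase. A sketch of that check under those grep exit-code semantics:

package main

import (
	"fmt"
	"os/exec"
)

// hasEndpoint mirrors the probe above: grep exits 0 when the endpoint
// appears in the file and 1 when it does not (the "will remove" case).
func hasEndpoint(path, endpoint string) (bool, error) {
	err := exec.Command("sudo", "grep", endpoint, path).Run()
	if err == nil {
		return true, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return false, nil
	}
	return false, err
}

func main() {
	ok, err := hasEndpoint("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:8441")
	fmt.Println(ok, err)
}
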
	I0914 18:50:53.995349  521996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:50:54.008876  521996 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 18:50:54.008892  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:54.078076  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.492408  521996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.414303118s)
	I0914 18:50:56.492430  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.736359  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.829024  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
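
Rather than a full `kubeadm init`, the restart path replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence, using the PATH prefix and config path shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order taken from the log; each phase runs against the same config.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
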
	I0914 18:50:56.921090  521996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:50:56.921153  521996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:50:56.936745  521996 api_server.go:72] duration metric: took 15.654613ms to wait for apiserver process to appear ...
	I0914 18:50:56.936758  521996 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:50:56.936773  521996 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0914 18:50:56.947201  521996 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0914 18:50:56.965309  521996 api_server.go:141] control plane version: v1.28.1
	I0914 18:50:56.965326  521996 api_server.go:131] duration metric: took 28.562381ms to wait for apiserver health ...
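
The healthz wait above is an HTTPS GET against https://192.168.49.2:8441/healthz that expects a 200 with body "ok". A minimal sketch; it skips certificate verification for brevity, which is an assumption of this example, whereas the real client would authenticate with the cluster's credentials:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch short; not what the
		// real health check does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
}
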
	I0914 18:50:56.965334  521996 cni.go:84] Creating CNI manager for ""
	I0914 18:50:56.965340  521996 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:50:56.968245  521996 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 18:50:56.970909  521996 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 18:50:56.976395  521996 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 18:50:56.976406  521996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 18:50:57.012302  521996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 18:50:57.504552  521996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:50:57.519513  521996 system_pods.go:59] 8 kube-system pods found
	I0914 18:50:57.519529  521996 system_pods.go:61] "coredns-5dd5756b68-8gmx4" [54060bf5-109d-46ae-9109-334e69e27e07] Running
	I0914 18:50:57.519534  521996 system_pods.go:61] "etcd-functional-759345" [1f30a799-d227-43e0-8599-81d36bac12c3] Running
	I0914 18:50:57.519538  521996 system_pods.go:61] "kindnet-lrpkn" [c3084e0a-78b3-4888-bb8f-f70cc32083a7] Running
	I0914 18:50:57.519543  521996 system_pods.go:61] "kube-apiserver-functional-759345" [650e7ed1-8a71-4907-95b4-b939c85b8b4d] Running
	I0914 18:50:57.519547  521996 system_pods.go:61] "kube-controller-manager-functional-759345" [bdfdc9e5-5fee-4d58-81fe-03b7cccef329] Running
	I0914 18:50:57.519551  521996 system_pods.go:61] "kube-proxy-th28x" [33dc7ee0-d321-46ce-aa60-311175ef90f3] Running
	I0914 18:50:57.519555  521996 system_pods.go:61] "kube-scheduler-functional-759345" [6b3625d9-1977-4c7c-b7d9-db355bb3836b] Running
	I0914 18:50:57.519563  521996 system_pods.go:61] "storage-provisioner" [4525561a-da21-495e-b7d3-5515c83d50df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:50:57.519570  521996 system_pods.go:74] duration metric: took 15.008777ms to wait for pod list to return data ...
	I0914 18:50:57.519577  521996 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:50:57.522915  521996 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 18:50:57.522933  521996 node_conditions.go:123] node cpu capacity is 2
	I0914 18:50:57.522943  521996 node_conditions.go:105] duration metric: took 3.361826ms to run NodePressure ...
	I0914 18:50:57.522959  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:57.743113  521996 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 18:50:57.748699  521996 retry.go:31] will retry after 212.268379ms: kubelet not initialised
	I0914 18:50:57.994491  521996 retry.go:31] will retry after 212.337309ms: kubelet not initialised
	I0914 18:50:58.223408  521996 kubeadm.go:787] kubelet initialised
	I0914 18:50:58.223417  521996 kubeadm.go:788] duration metric: took 480.292581ms waiting for restarted kubelet to initialise ...
	I0914 18:50:58.223426  521996 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:50:58.235226  521996 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261039  521996 pod_ready.go:97] error getting pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261055  521996 pod_ready.go:81] duration metric: took 1.025804117s waiting for pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.261065  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261148  521996 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261473  521996 pod_ready.go:97] error getting pod "etcd-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261484  521996 pod_ready.go:81] duration metric: took 328.747µs waiting for pod "etcd-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.261493  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261514  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261749  521996 pod_ready.go:97] error getting pod "kube-apiserver-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261756  521996 pod_ready.go:81] duration metric: took 234.659µs waiting for pod "kube-apiserver-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.261763  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261779  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261990  521996 pod_ready.go:97] error getting pod "kube-controller-manager-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261999  521996 pod_ready.go:81] duration metric: took 214.089µs waiting for pod "kube-controller-manager-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.262006  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262025  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-th28x" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.262242  521996 pod_ready.go:97] error getting pod "kube-proxy-th28x" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262251  521996 pod_ready.go:81] duration metric: took 219.086µs waiting for pod "kube-proxy-th28x" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.262257  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-th28x" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262275  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.262428  521996 pod_ready.go:97] error getting pod "kube-scheduler-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262434  521996 pod_ready.go:81] duration metric: took 153.657µs waiting for pod "kube-scheduler-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.262440  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262457  521996 pod_ready.go:38] duration metric: took 1.039024058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:50:59.262474  521996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0914 18:50:59.272278  521996 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
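
The oom_adj failure above is a shell-expansion artifact: the apiserver had just exited, so `$(pgrep kube-apiserver)` expanded to nothing and the path became /proc//oom_adj. A guarded sketch that checks pgrep's result before reading:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		// pgrep exits 1 when nothing matches, the situation in the log.
		fmt.Println("kube-apiserver not running; skipping oom_adj check")
		return
	}
	pid := strings.Fields(string(out))[0] // first match is enough for a sketch
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Printf("oom_adj for pid %s: %s", pid, data)
}
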
	I0914 18:50:59.272292  521996 kubeadm.go:640] restartCluster took 10.770552378s
	I0914 18:50:59.272298  521996 kubeadm.go:406] StartCluster complete in 10.869371087s
	I0914 18:50:59.272311  521996 settings.go:142] acquiring lock: {Name:mkfaf0f329c2736368d7fc21433e53e0c9a5b1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:50:59.272373  521996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:50:59.273063  521996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/kubeconfig: {Name:mk6a8e8b5c770de881617bb4e8ebf560fd4b6800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:50:59.273295  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 18:50:59.273606  521996 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:50:59.273725  521996 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 18:50:59.273782  521996 addons.go:69] Setting storage-provisioner=true in profile "functional-759345"
	I0914 18:50:59.273806  521996 addons.go:231] Setting addon storage-provisioner=true in "functional-759345"
	W0914 18:50:59.273811  521996 addons.go:240] addon storage-provisioner should already be in state true
	I0914 18:50:59.273874  521996 host.go:66] Checking if "functional-759345" exists ...
	I0914 18:50:59.274383  521996 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	I0914 18:50:59.274448  521996 addons.go:69] Setting default-storageclass=true in profile "functional-759345"
	I0914 18:50:59.274462  521996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-759345"
	I0914 18:50:59.274824  521996 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	W0914 18:50:59.275016  521996 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-759345" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.275028  521996 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.275064  521996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:50:59.277923  521996 out.go:177] * Verifying Kubernetes components...
	I0914 18:50:59.280238  521996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:50:59.319059  521996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0914 18:50:59.318692  521996 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	I0914 18:50:59.320760  521996 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:50:59.320769  521996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:50:59.320832  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:59.341007  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	E0914 18:50:59.405503  521996 start.go:882] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0914 18:50:59.405524  521996 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0914 18:50:59.405540  521996 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0914 18:50:59.405549  521996 node_ready.go:35] waiting up to 6m0s for node "functional-759345" to be "Ready" ...
	I0914 18:50:59.405881  521996 node_ready.go:53] error getting node "functional-759345": Get "https://192.168.49.2:8441/api/v1/nodes/functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.405891  521996 node_ready.go:38] duration metric: took 329.24µs waiting for node "functional-759345" to be "Ready" ...
	I0914 18:50:59.407559  521996 out.go:177] 
	W0914 18:50:59.409592  521996 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-759345": Get "https://192.168.49.2:8441/api/v1/nodes/functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:50:59.409614  521996 out.go:239] * 
	W0914 18:50:59.410665  521996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:50:59.413720  521996 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1ba8309bf660e       04b4eaa3d3db8       2 seconds ago        Running             kindnet-cni               1                   f04e4e2a517d6       kindnet-lrpkn
	30ab48e06d49d       ba04bb24b9575       2 seconds ago        Running             storage-provisioner       2                   189fc09234981       storage-provisioner
	08745f5823081       b29fb62480892       2 seconds ago        Exited              kube-apiserver            1                   7ff4558e377ff       kube-apiserver-functional-759345
	b7a6c40efcf70       97e04611ad434       2 seconds ago        Running             coredns                   1                   0a40c1201c6b1       coredns-5dd5756b68-8gmx4
	69fc78d71f274       812f5241df7fd       2 seconds ago        Running             kube-proxy                1                   aff5f56a67e9e       kube-proxy-th28x
	0fb83152a87c7       ba04bb24b9575       18 seconds ago       Exited              storage-provisioner       1                   189fc09234981       storage-provisioner
	9c061c51a7996       97e04611ad434       35 seconds ago       Exited              coredns                   0                   0a40c1201c6b1       coredns-5dd5756b68-8gmx4
	e94bc3a965652       04b4eaa3d3db8       47 seconds ago       Exited              kindnet-cni               0                   f04e4e2a517d6       kindnet-lrpkn
	b941362dba88e       812f5241df7fd       48 seconds ago       Exited              kube-proxy                0                   aff5f56a67e9e       kube-proxy-th28x
	8770ffd08048f       8b6e1980b7584       About a minute ago   Running             kube-controller-manager   0                   e80fd32a4bd0a       kube-controller-manager-functional-759345
	1d92ce7b37e23       b4a5a57e99492       About a minute ago   Running             kube-scheduler            0                   28082c135185c       kube-scheduler-functional-759345
	9063c38973712       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   96a1a963d3464       etcd-functional-759345
	
	* 
	* ==> containerd <==
	* Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.570737965Z" level=info msg="StartContainer for \"1ba8309bf660e9c8a958e80210f114290ccd4063c5ae2ef82b9b9d50d130f591\" returns successfully"
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.667718918Z" level=info msg="StartContainer for \"30ab48e06d49d30e0f4194ba432a102e542eac20b5c70a8d57e148c0fd04ec2f\" returns successfully"
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.699694522Z" level=info msg="shim disconnected" id=08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.699843684Z" level=warning msg="cleaning up after shim disconnected" id=08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc namespace=k8s.io
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.699856443Z" level=info msg="cleaning up dead shim"
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.716640968Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:50:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3994 runtime=io.containerd.runc.v2\n"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.059888045Z" level=info msg="StopContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" with timeout 2 (s)"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.060224209Z" level=info msg="Stop container \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" with signal terminated"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.146573376Z" level=info msg="shim disconnected" id=d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.146638000Z" level=warning msg="cleaning up after shim disconnected" id=d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb namespace=k8s.io
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.146648995Z" level=info msg="cleaning up dead shim"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.152213811Z" level=info msg="shim disconnected" id=2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.152275563Z" level=warning msg="cleaning up after shim disconnected" id=2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 namespace=k8s.io
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.152287198Z" level=info msg="cleaning up dead shim"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.165925971Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4167 runtime=io.containerd.runc.v2\n"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.169183781Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.169640348Z" level=info msg="StopContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" returns successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.170734061Z" level=info msg="StopPodSandbox for \"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.170815021Z" level=info msg="Container to stop \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.173005539Z" level=info msg="TearDown network for sandbox \"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb\" successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.173044185Z" level=info msg="StopPodSandbox for \"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb\" returns successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.182845086Z" level=info msg="RemoveContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.189787144Z" level=info msg="RemoveContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" returns successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.195328641Z" level=info msg="RemoveContainer for \"518dce08aac7ba0303423c550d12528f9903c08bc2fddd7c465096541c454a51\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.201197417Z" level=info msg="RemoveContainer for \"518dce08aac7ba0303423c550d12528f9903c08bc2fddd7c465096541c454a51\" returns successfully"
	
	* 
	* ==> coredns [9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46435 - 35346 "HINFO IN 2418697808068076682.3086629869597095228. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03713899s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b7a6c40efcf707c80108e5ce7ae4fff5b4f1c4fe03ec48227b97f39c5f2c9ab4] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45574 - 8669 "HINFO IN 2546873242402138308.1549916675134106726. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01409644s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000738] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001019] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000e6d70ae1
	[  +0.001093] FS-Cache: N-key=[8] '943a5c0100000000'
	[  +0.020369] FS-Cache: Duplicate cookie detected
	[  +0.000880] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000007cc3b60a
	[  +0.001158] FS-Cache: O-key=[8] '943a5c0100000000'
	[  +0.000773] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001149] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=000000009e94c0ae
	[  +0.001230] FS-Cache: N-key=[8] '943a5c0100000000'
	[  +2.856088] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000000e997ae2
	[  +0.001081] FS-Cache: O-key=[8] '933a5c0100000000'
	[  +0.000711] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000e6d70ae1
	[  +0.001069] FS-Cache: N-key=[8] '933a5c0100000000'
	[  +0.399166] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=0000000095ffa149
	[  +0.001108] FS-Cache: O-key=[8] '993a5c0100000000'
	[  +0.000739] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=0000000080474072
	[  +0.001122] FS-Cache: N-key=[8] '993a5c0100000000'
	[ +10.571489] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624] <==
	* {"level":"info","ts":"2023-09-14T18:49:50.243434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-09-14T18:49:50.243558Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-09-14T18:49:50.244092Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T18:49:50.244228Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-09-14T18:49:50.244247Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-09-14T18:49:50.244971Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T18:49:50.24504Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T18:49:50.416134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T18:49:50.416364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T18:49:50.416534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-09-14T18:49:50.416665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.416766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.416873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.416957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.418734Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:49:50.419103Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-759345 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T18:49:50.419232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T18:49:50.420555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T18:49:50.433552Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T18:49:50.433746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T18:49:50.420759Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T18:49:50.438605Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-09-14T18:49:50.421136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:49:50.439123Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:49:50.439253Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:51:00 up  4:33,  0 users,  load average: 2.17, 1.82, 1.34
	Linux functional-759345 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1ba8309bf660e9c8a958e80210f114290ccd4063c5ae2ef82b9b9d50d130f591] <==
	* I0914 18:50:58.637829       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 18:50:58.638100       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0914 18:50:58.638356       1 main.go:116] setting mtu 1500 for CNI 
	I0914 18:50:58.638464       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 18:50:58.638555       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 18:50:59.027645       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:59.028541       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343] <==
	* I0914 18:50:13.428252       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 18:50:13.428321       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0914 18:50:13.428542       1 main.go:116] setting mtu 1500 for CNI 
	I0914 18:50:13.428565       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 18:50:13.428623       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 18:50:13.925827       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:13.925858       1 main.go:227] handling current node
	I0914 18:50:23.941160       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:23.941193       1 main.go:227] handling current node
	I0914 18:50:33.952981       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:33.953008       1 main.go:227] handling current node
	I0914 18:50:43.965247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:43.965284       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc] <==
	* I0914 18:50:58.653236       1 options.go:220] external host was not specified, using 192.168.49.2
	I0914 18:50:58.654424       1 server.go:148] Version: v1.28.1
	I0914 18:50:58.654453       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0914 18:50:58.654708       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da] <==
	* I0914 18:50:10.134309       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wrlqg"
	I0914 18:50:10.171546       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8gmx4"
	I0914 18:50:10.188048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="274.219281ms"
	I0914 18:50:10.213409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.27746ms"
	I0914 18:50:10.213585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.867µs"
	I0914 18:50:10.221169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.699µs"
	I0914 18:50:10.277168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.273µs"
	I0914 18:50:10.290617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.252µs"
	I0914 18:50:10.376051       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0914 18:50:10.401175       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wrlqg"
	I0914 18:50:10.423094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.886598ms"
	I0914 18:50:10.459742       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 18:50:10.462963       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 18:50:10.462992       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0914 18:50:10.480397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.16597ms"
	I0914 18:50:10.480681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.257µs"
	I0914 18:50:10.480834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.642µs"
	I0914 18:50:12.428148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.344µs"
	I0914 18:50:12.435711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.067µs"
	I0914 18:50:12.445350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="190.925µs"
	I0914 18:50:26.438919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.173µs"
	I0914 18:50:26.468455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.56249ms"
	I0914 18:50:26.468560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.441µs"
	I0914 18:50:57.942505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.323719ms"
	I0914 18:50:57.942655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.648µs"
	
	* 
	* ==> kube-proxy [69fc78d71f2742c1007548b03bf0bc1359bb50f645cc95a6f3d69a87a928464b] <==
	* I0914 18:50:58.775894       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 18:50:58.778354       1 server_others.go:152] "Using iptables Proxier"
	I0914 18:50:58.778392       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 18:50:58.778405       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 18:50:58.778518       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 18:50:58.778765       1 server.go:846] "Version info" version="v1.28.1"
	I0914 18:50:58.778781       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:50:58.779808       1 config.go:188] "Starting service config controller"
	I0914 18:50:58.779937       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 18:50:58.779979       1 config.go:97] "Starting endpoint slice config controller"
	I0914 18:50:58.779988       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 18:50:58.781015       1 config.go:315] "Starting node config controller"
	I0914 18:50:58.781037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 18:50:58.881070       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 18:50:58.881150       1 shared_informer.go:318] Caches are synced for node config
	I0914 18:50:58.881330       1 shared_informer.go:318] Caches are synced for service config
	W0914 18:50:59.112333       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0914 18:50:59.112380       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0914 18:50:59.112400       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0914 18:51:00.022817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-759345&resourceVersion=484": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:00.022875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-759345&resourceVersion=484": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:00.192876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:00.193027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:00.528866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:00.528925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-proxy [b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81] <==
	* I0914 18:50:12.194628       1 server_others.go:69] "Using iptables proxy"
	I0914 18:50:12.209124       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0914 18:50:12.243944       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 18:50:12.246551       1 server_others.go:152] "Using iptables Proxier"
	I0914 18:50:12.246739       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 18:50:12.246832       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 18:50:12.246952       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 18:50:12.247314       1 server.go:846] "Version info" version="v1.28.1"
	I0914 18:50:12.247646       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:50:12.248786       1 config.go:188] "Starting service config controller"
	I0914 18:50:12.249123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 18:50:12.249292       1 config.go:97] "Starting endpoint slice config controller"
	I0914 18:50:12.249403       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 18:50:12.251534       1 config.go:315] "Starting node config controller"
	I0914 18:50:12.251678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 18:50:12.349620       1 shared_informer.go:318] Caches are synced for service config
	I0914 18:50:12.349696       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 18:50:12.351966       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3] <==
	* W0914 18:49:54.922773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:49:54.923291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 18:49:54.922820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:49:54.923569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 18:49:54.923930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:49:54.924112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 18:49:54.924380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:49:54.924535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 18:49:54.924848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:49:54.925049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 18:49:54.928631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:49:54.928938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 18:49:54.929250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:49:54.929453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 18:49:54.929686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 18:49:54.929847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 18:49:54.930081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:49:54.930282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 18:49:54.930559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 18:49:54.930719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 18:49:54.930949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:49:54.931121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 18:49:54.931604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:49:54.932416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0914 18:49:56.113910       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 14 18:51:00 functional-759345 kubelet[3629]: E0914 18:51:00.212092    3629 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-759345_kube-system(18070792e98a31783321ccb1c8fa0250)\"" pod="kube-system/kube-apiserver-functional-759345" podUID="18070792e98a31783321ccb1c8fa0250"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.212786    3629 status_manager.go:853] "Failed to get status for pod" podUID="18070792e98a31783321ccb1c8fa0250" pod="kube-system/kube-apiserver-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.213193    3629 status_manager.go:853] "Failed to get status for pod" podUID="36bdd136296b0d2b4232a27e95688fee" pod="kube-system/kube-scheduler-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.213574    3629 status_manager.go:853] "Failed to get status for pod" podUID="4525561a-da21-495e-b7d3-5515c83d50df" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.213910    3629 status_manager.go:853] "Failed to get status for pod" podUID="c3084e0a-78b3-4888-bb8f-f70cc32083a7" pod="kube-system/kindnet-lrpkn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-lrpkn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.214251    3629 status_manager.go:853] "Failed to get status for pod" podUID="33dc7ee0-d321-46ce-aa60-311175ef90f3" pod="kube-system/kube-proxy-th28x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.214564    3629 status_manager.go:853] "Failed to get status for pod" podUID="54060bf5-109d-46ae-9109-334e69e27e07" pod="kube-system/coredns-5dd5756b68-8gmx4" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: E0914 18:51:00.643772    3629 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-759345.1784d88ccde2f7b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-759345", UID:"1b06718b4c7fa973ebc40bb50dcf6660", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Stopping container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"functional-759345"}, FirstTimestamp:time.Date(2023, time.September, 14, 18, 50, 59, 59374009, time.Local), LastTimestamp:time.Date(2023, time.September, 14, 18, 50, 59, 59374009, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-759345"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.857156    3629 status_manager.go:853] "Failed to get status for pod" podUID="36bdd136296b0d2b4232a27e95688fee" pod="kube-system/kube-scheduler-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.857648    3629 status_manager.go:853] "Failed to get status for pod" podUID="4525561a-da21-495e-b7d3-5515c83d50df" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.857958    3629 status_manager.go:853] "Failed to get status for pod" podUID="c3084e0a-78b3-4888-bb8f-f70cc32083a7" pod="kube-system/kindnet-lrpkn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-lrpkn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.858303    3629 status_manager.go:853] "Failed to get status for pod" podUID="33dc7ee0-d321-46ce-aa60-311175ef90f3" pod="kube-system/kube-proxy-th28x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.858659    3629 status_manager.go:853] "Failed to get status for pod" podUID="54060bf5-109d-46ae-9109-334e69e27e07" pod="kube-system/coredns-5dd5756b68-8gmx4" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.859008    3629 status_manager.go:853] "Failed to get status for pod" podUID="ced9b208564f27cb5f2c00ad557393d5" pod="kube-system/etcd-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.859332    3629 status_manager.go:853] "Failed to get status for pod" podUID="18070792e98a31783321ccb1c8fa0250" pod="kube-system/kube-apiserver-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.058753    3629 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1b06718b4c7fa973ebc40bb50dcf6660" path="/var/lib/kubelet/pods/1b06718b4c7fa973ebc40bb50dcf6660/volumes"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.213113    3629 scope.go:117] "RemoveContainer" containerID="08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: E0914 18:51:01.213697    3629 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-759345_kube-system(18070792e98a31783321ccb1c8fa0250)\"" pod="kube-system/kube-apiserver-functional-759345" podUID="18070792e98a31783321ccb1c8fa0250"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.213841    3629 status_manager.go:853] "Failed to get status for pod" podUID="54060bf5-109d-46ae-9109-334e69e27e07" pod="kube-system/coredns-5dd5756b68-8gmx4" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.215199    3629 status_manager.go:853] "Failed to get status for pod" podUID="ced9b208564f27cb5f2c00ad557393d5" pod="kube-system/etcd-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.215633    3629 status_manager.go:853] "Failed to get status for pod" podUID="18070792e98a31783321ccb1c8fa0250" pod="kube-system/kube-apiserver-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.216009    3629 status_manager.go:853] "Failed to get status for pod" podUID="36bdd136296b0d2b4232a27e95688fee" pod="kube-system/kube-scheduler-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.216363    3629 status_manager.go:853] "Failed to get status for pod" podUID="4525561a-da21-495e-b7d3-5515c83d50df" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.216746    3629 status_manager.go:853] "Failed to get status for pod" podUID="c3084e0a-78b3-4888-bb8f-f70cc32083a7" pod="kube-system/kindnet-lrpkn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-lrpkn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.217086    3629 status_manager.go:853] "Failed to get status for pod" podUID="33dc7ee0-d321-46ce-aa60-311175ef90f3" pod="kube-system/kube-proxy-th28x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	* 
	* ==> storage-provisioner [0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098] <==
	* I0914 18:50:42.598792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:50:42.618921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:50:42.619064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:50:42.629049       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:50:42.631036       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-759345_5c285082-4544-4263-bda2-0f54cd004cbc!
	I0914 18:50:42.631148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bccc3ad9-bd5f-4e25-8328-902a0a5d0e29", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-759345_5c285082-4544-4263-bda2-0f54cd004cbc became leader
	I0914 18:50:42.731609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-759345_5c285082-4544-4263-bda2-0f54cd004cbc!
	
	* 
	* ==> storage-provisioner [30ab48e06d49d30e0f4194ba432a102e542eac20b5c70a8d57e148c0fd04ec2f] <==
	* I0914 18:50:58.704878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:50:58.737468       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:50:58.737558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:51:00.877483  523390 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
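The logs above point to a single root cause: the kube-apiserver container exits immediately with "failed to listen on 0.0.0.0:8441: bind: address already in use", kubelet then holds it in a 10s CrashLoopBackOff, and every dependent component (coredns, kube-proxy, kubelet status updates, and the failed "describe nodes" call in the stderr above) reports "connection refused" against 192.168.49.2:8441 or the 10.96.0.1:443 service VIP that fronts it. The "%!s(MISSING)"-style runs in the kube-proxy lines appear to be printf-style formatting artifacts from logging a percent-encoded URL, not part of the actual request paths. A hypothetical follow-up, not part of the test run and assuming ss and crictl are present in the kicbase image (they normally are), would be:

	# What is listening on 8441 inside the node?
	out/minikube-linux-arm64 -p functional-759345 ssh "sudo ss -ltnp | grep 8441"
	# Is the apiserver container crash-looping under containerd?
	out/minikube-linux-arm64 -p functional-759345 ssh "sudo crictl ps -a | grep kube-apiserver"
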
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-759345 -n functional-759345
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-759345 -n functional-759345: exit status 2 (352.111542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-759345" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (18.18s)
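ExtraConfig, as the name suggests, appears to restart the existing cluster with additional component flags, so the 18.18s failure is consistent with the restarted apiserver never re-binding 8441 while a stale listener lingered. A hypothetical manual recovery outside the harness (standard minikube commands; whether it clears the stale listener in this particular run is an assumption):

	# Stop and restart the node; a full stop normally releases port 8441
	out/minikube-linux-arm64 stop -p functional-759345
	out/minikube-linux-arm64 start -p functional-759345
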

                                                
                                    
TestFunctional/serial/ComponentHealth (2.45s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-759345 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-759345 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (72.685942ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-759345 get po -l tier=control-plane -n kube-system -o=json": exit status 1
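The empty "items" list is not a labeling problem: kubectl never reached the apiserver, as the "connection refused" on 192.168.49.2:8441 in the stderr shows, so ComponentHealth fails for the same reason as ExtraConfig above. Once the apiserver recovers, the test's own selector should list the control-plane static pods again:

	# Re-check after the apiserver is back (same query the test runs, minus -o=json):
	kubectl --context functional-759345 get po -l tier=control-plane -n kube-system
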
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-759345
helpers_test.go:235: (dbg) docker inspect functional-759345:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74",
	        "Created": "2023-09-14T18:49:34.627644488Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T18:49:34.989493968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d5e38ecae883e5d7fbaaccc26de9209a95c7f11864ba7a4058d1702f044efe72",
	        "ResolvConfPath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/hostname",
	        "HostsPath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/hosts",
	        "LogPath": "/var/lib/docker/containers/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74/1541df32776c02b6d65e7649bb655cba1f7bd343b934b2de666f1b3feb404e74-json.log",
	        "Name": "/functional-759345",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-759345:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-759345",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d-init/diff:/var/lib/docker/overlay2/b22941fdffad93645039179e8c1eee3cd74765d1689d400cab1ec16e85e4dbbf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d5679abea422dcf8ef7c9a640ace570be640b59ee775a6cadc8fa949e57d11d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-759345",
	                "Source": "/var/lib/docker/volumes/functional-759345/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-759345",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-759345",
	                "name.minikube.sigs.k8s.io": "functional-759345",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb843a5bf256ed5327fb8ca773c65b1271c15b140f69312a55e743614b517470",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb843a5bf256",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-759345": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1541df32776c",
	                        "functional-759345"
	                    ],
	                    "NetworkID": "8a85198388218c32bae5cb9e94a3a74f580a87b5edd3d73974881f8a2d9b5947",
	                    "EndpointID": "25026d10ce7681ad07866e695cadbdf17e6b398e57c55ab974612b1312c81796",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-759345 -n functional-759345
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-759345 -n functional-759345: exit status 2 (337.744579ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 logs -n 25: (1.605294605s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-700163 --log_dir                                                  | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	|         | /tmp/nospam-700163 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-700163                                                         | nospam-700163     | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:49 UTC |
	| start   | -p functional-759345                                                     | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:49 UTC | 14 Sep 23 18:50 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-759345                                                     | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-759345 cache add                                              | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | minikube-local-cache-test:functional-759345                              |                   |         |         |                     |                     |
	| cache   | functional-759345 cache delete                                           | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | minikube-local-cache-test:functional-759345                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	| ssh     | functional-759345 ssh sudo                                               | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-759345                                                        | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-759345 ssh                                                    | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-759345 cache reload                                           | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	| ssh     | functional-759345 ssh                                                    | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-759345 kubectl --                                             | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC | 14 Sep 23 18:50 UTC |
	|         | --context functional-759345                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-759345                                                     | functional-759345 | jenkins | v1.31.2 | 14 Sep 23 18:50 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:50:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:50:43.773466  521996 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:50:43.773676  521996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:50:43.773680  521996 out.go:309] Setting ErrFile to fd 2...
	I0914 18:50:43.773685  521996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:50:43.773935  521996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:50:43.774338  521996 out.go:303] Setting JSON to false
	I0914 18:50:43.775389  521996 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16387,"bootTime":1694701057,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:50:43.775449  521996 start.go:138] virtualization:  
	I0914 18:50:43.777961  521996 out.go:177] * [functional-759345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:50:43.780289  521996 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:50:43.780495  521996 notify.go:220] Checking for updates...
	I0914 18:50:43.784406  521996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:50:43.786363  521996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:50:43.788361  521996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:50:43.789983  521996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:50:43.791686  521996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:50:43.794060  521996 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:50:43.794161  521996 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:50:43.820493  521996 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:50:43.820616  521996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:50:43.899581  521996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-14 18:50:43.890269867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:50:43.899669  521996 docker.go:294] overlay module found
	I0914 18:50:43.901685  521996 out.go:177] * Using the docker driver based on existing profile
	I0914 18:50:43.903711  521996 start.go:298] selected driver: docker
	I0914 18:50:43.903718  521996 start.go:902] validating driver "docker" against &{Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:50:43.903821  521996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:50:43.903914  521996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:50:43.975798  521996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:48 SystemTime:2023-09-14 18:50:43.966696431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:50:43.976196  521996 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:50:43.976250  521996 cni.go:84] Creating CNI manager for ""
	I0914 18:50:43.976256  521996 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:50:43.976266  521996 start_flags.go:321] config:
	{Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:50:43.980370  521996 out.go:177] * Starting control plane node functional-759345 in cluster functional-759345
	I0914 18:50:43.982631  521996 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0914 18:50:43.984683  521996 out.go:177] * Pulling base image ...
	I0914 18:50:43.986964  521996 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:50:43.987015  521996 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4
	I0914 18:50:43.987022  521996 cache.go:57] Caching tarball of preloaded images
	I0914 18:50:43.987044  521996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0914 18:50:43.987106  521996 preload.go:174] Found /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 18:50:43.987115  521996 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on containerd
	I0914 18:50:43.987223  521996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/config.json ...
	I0914 18:50:44.010103  521996 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0914 18:50:44.010122  521996 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0914 18:50:44.010146  521996 cache.go:195] Successfully downloaded all kic artifacts
	I0914 18:50:44.010180  521996 start.go:365] acquiring machines lock for functional-759345: {Name:mka6c7880e02c7b8fafdad11b137b4a7f14a8d64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:50:44.010270  521996 start.go:369] acquired machines lock for "functional-759345" in 62.277µs
	I0914 18:50:44.010292  521996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:50:44.010297  521996 fix.go:54] fixHost starting: 
	I0914 18:50:44.010598  521996 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	I0914 18:50:44.029939  521996 fix.go:102] recreateIfNeeded on functional-759345: state=Running err=<nil>
	W0914 18:50:44.029979  521996 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 18:50:44.032633  521996 out.go:177] * Updating the running docker "functional-759345" container ...
	I0914 18:50:44.034590  521996 machine.go:88] provisioning docker machine ...
	I0914 18:50:44.034634  521996 ubuntu.go:169] provisioning hostname "functional-759345"
	I0914 18:50:44.034706  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:44.053383  521996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:50:44.053799  521996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0914 18:50:44.053809  521996 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-759345 && echo "functional-759345" | sudo tee /etc/hostname
	I0914 18:50:44.208913  521996 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-759345
	
	I0914 18:50:44.208982  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:44.232022  521996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:50:44.232427  521996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0914 18:50:44.232443  521996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-759345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-759345/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-759345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:50:44.370611  521996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:50:44.370627  521996 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17217-492678/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-492678/.minikube}
	I0914 18:50:44.370655  521996 ubuntu.go:177] setting up certificates
	I0914 18:50:44.370663  521996 provision.go:83] configureAuth start
	I0914 18:50:44.370727  521996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-759345
	I0914 18:50:44.389048  521996 provision.go:138] copyHostCerts
	I0914 18:50:44.389106  521996 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem, removing ...
	I0914 18:50:44.389114  521996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem
	I0914 18:50:44.389191  521996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem (1082 bytes)
	I0914 18:50:44.389293  521996 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem, removing ...
	I0914 18:50:44.389297  521996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem
	I0914 18:50:44.389324  521996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem (1123 bytes)
	I0914 18:50:44.389379  521996 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem, removing ...
	I0914 18:50:44.389382  521996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem
	I0914 18:50:44.389406  521996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem (1679 bytes)
	I0914 18:50:44.389448  521996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem org=jenkins.functional-759345 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-759345]
	I0914 18:50:45.480410  521996 provision.go:172] copyRemoteCerts
	I0914 18:50:45.480465  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:50:45.480504  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:45.500868  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:45.599668  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:50:45.631197  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 18:50:45.660537  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:50:45.689217  521996 provision.go:86] duration metric: configureAuth took 1.318536338s
	I0914 18:50:45.689241  521996 ubuntu.go:193] setting minikube options for container-runtime
	I0914 18:50:45.689471  521996 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:50:45.689477  521996 machine.go:91] provisioned docker machine in 1.654879849s
	I0914 18:50:45.689483  521996 start.go:300] post-start starting for "functional-759345" (driver="docker")
	I0914 18:50:45.689493  521996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:50:45.689545  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:50:45.689580  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:45.709661  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:45.814891  521996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:50:45.820551  521996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 18:50:45.820605  521996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 18:50:45.820615  521996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 18:50:45.820622  521996 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 18:50:45.820631  521996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/addons for local assets ...
	I0914 18:50:45.820696  521996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/files for local assets ...
	I0914 18:50:45.820770  521996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem -> 4980292.pem in /etc/ssl/certs
	I0914 18:50:45.820852  521996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/test/nested/copy/498029/hosts -> hosts in /etc/test/nested/copy/498029
	I0914 18:50:45.820899  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/498029
	I0914 18:50:45.832370  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem --> /etc/ssl/certs/4980292.pem (1708 bytes)
	I0914 18:50:45.862206  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/test/nested/copy/498029/hosts --> /etc/test/nested/copy/498029/hosts (40 bytes)
	I0914 18:50:45.892276  521996 start.go:303] post-start completed in 202.777219ms
	I0914 18:50:45.892346  521996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:50:45.892388  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:45.910384  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:46.010408  521996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 18:50:46.017355  521996 fix.go:56] fixHost completed within 2.007047862s
	I0914 18:50:46.017369  521996 start.go:83] releasing machines lock for "functional-759345", held for 2.007091357s
	I0914 18:50:46.017446  521996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-759345
	I0914 18:50:46.035560  521996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:50:46.035603  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:46.035846  521996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:50:46.035889  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:46.058079  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:46.060861  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:50:46.286469  521996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:50:46.292491  521996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 18:50:46.298508  521996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 18:50:46.321686  521996 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 18:50:46.321757  521996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:50:46.332988  521996 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 18:50:46.333001  521996 start.go:469] detecting cgroup driver to use...
	I0914 18:50:46.333032  521996 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 18:50:46.333088  521996 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 18:50:46.348433  521996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 18:50:46.362645  521996 docker.go:196] disabling cri-docker service (if available) ...
	I0914 18:50:46.362706  521996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:50:46.379268  521996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:50:46.393127  521996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:50:46.519813  521996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:50:46.648439  521996 docker.go:212] disabling docker service ...
	I0914 18:50:46.648497  521996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:50:46.667094  521996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:50:46.682207  521996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:50:46.810221  521996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:50:46.934642  521996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:50:46.949724  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:50:46.970367  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 18:50:46.983129  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 18:50:46.995930  521996 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 18:50:46.996007  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 18:50:47.009506  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:50:47.021923  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 18:50:47.034337  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:50:47.048136  521996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:50:47.060457  521996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 18:50:47.073505  521996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:50:47.084473  521996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:50:47.095380  521996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:50:47.220202  521996 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 18:50:47.435451  521996 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 18:50:47.435513  521996 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 18:50:47.440635  521996 start.go:537] Will wait 60s for crictl version
	I0914 18:50:47.440690  521996 ssh_runner.go:195] Run: which crictl
	I0914 18:50:47.445110  521996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:50:47.500373  521996 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.22
	RuntimeApiVersion:  v1
	I0914 18:50:47.500429  521996 ssh_runner.go:195] Run: containerd --version
	I0914 18:50:47.533973  521996 ssh_runner.go:195] Run: containerd --version
	I0914 18:50:47.574376  521996 out.go:177] * Preparing Kubernetes v1.28.1 on containerd 1.6.22 ...
	I0914 18:50:47.576188  521996 cli_runner.go:164] Run: docker network inspect functional-759345 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:50:47.593551  521996 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 18:50:47.600385  521996 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0914 18:50:47.602346  521996 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:50:47.602429  521996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:50:47.663782  521996 containerd.go:604] all images are preloaded for containerd runtime.
	I0914 18:50:47.663793  521996 containerd.go:518] Images already preloaded, skipping extraction
	I0914 18:50:47.663849  521996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:50:47.709563  521996 containerd.go:604] all images are preloaded for containerd runtime.
	I0914 18:50:47.709574  521996 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:50:47.709647  521996 ssh_runner.go:195] Run: sudo crictl info
	I0914 18:50:47.753147  521996 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0914 18:50:47.753169  521996 cni.go:84] Creating CNI manager for ""
	I0914 18:50:47.753175  521996 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:50:47.753183  521996 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 18:50:47.753201  521996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-759345 NodeName:functional-759345 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:50:47.753359  521996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-759345"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:50:47.753423  521996 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-759345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0914 18:50:47.753503  521996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 18:50:47.765115  521996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:50:47.765181  521996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:50:47.777187  521996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0914 18:50:47.798901  521996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:50:47.820529  521996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0914 18:50:47.842636  521996 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 18:50:47.847320  521996 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345 for IP: 192.168.49.2
	I0914 18:50:47.847342  521996 certs.go:190] acquiring lock for shared ca certs: {Name:mka5985e85be7ad08b440e022e8dd6d327029a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:50:47.847469  521996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key
	I0914 18:50:47.847504  521996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key
	I0914 18:50:47.847575  521996 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.key
	I0914 18:50:47.847619  521996 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/apiserver.key.dd3b5fb2
	I0914 18:50:47.847655  521996 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/proxy-client.key
	I0914 18:50:47.847778  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem (1338 bytes)
	W0914 18:50:47.847805  521996 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029_empty.pem, impossibly tiny 0 bytes
	I0914 18:50:47.847814  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:50:47.847837  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:50:47.847860  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:50:47.847885  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem (1679 bytes)
	I0914 18:50:47.847941  521996 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem (1708 bytes)
	I0914 18:50:47.848715  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 18:50:47.878526  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:50:47.908268  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:50:47.938593  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:50:47.967532  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:50:47.999408  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 18:50:48.048868  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:50:48.081205  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:50:48.113734  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:50:48.144456  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem --> /usr/share/ca-certificates/498029.pem (1338 bytes)
	I0914 18:50:48.175909  521996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem --> /usr/share/ca-certificates/4980292.pem (1708 bytes)
	I0914 18:50:48.205388  521996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
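
	Note the final "scp memory -->" entry: unlike the earlier lines there is no source file, because the kubeconfig is rendered in memory and streamed straight to the remote path. A rough stand-in for that idea, assuming a plain ssh client and the SSH port this run used (33407, per the sshutil line further down); the sudo tee trick is one common way to write a root-owned file over a non-root session:

    package main

    import (
    	"bytes"
    	"os/exec"
    )

    // copyFromMemory streams an in-memory buffer to a remote path over SSH,
    // never touching the local disk - the shape of "scp memory -->" above.
    func copyFromMemory(data []byte, remotePath string) error {
    	cmd := exec.Command("ssh", "-p", "33407", "docker@127.0.0.1",
    		"sudo tee "+remotePath+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(data)
    	return cmd.Run()
    }

    func main() {
    	_ = copyFromMemory([]byte("kubeconfig contents"), "/var/lib/minikube/kubeconfig")
    }
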
	I0914 18:50:48.229353  521996 ssh_runner.go:195] Run: openssl version
	I0914 18:50:48.236433  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:50:48.248642  521996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:50:48.253350  521996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:50:48.253407  521996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:50:48.262282  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:50:48.273480  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/498029.pem && ln -fs /usr/share/ca-certificates/498029.pem /etc/ssl/certs/498029.pem"
	I0914 18:50:48.285008  521996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/498029.pem
	I0914 18:50:48.289762  521996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:49 /usr/share/ca-certificates/498029.pem
	I0914 18:50:48.289819  521996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/498029.pem
	I0914 18:50:48.298440  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/498029.pem /etc/ssl/certs/51391683.0"
	I0914 18:50:48.309112  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4980292.pem && ln -fs /usr/share/ca-certificates/4980292.pem /etc/ssl/certs/4980292.pem"
	I0914 18:50:48.321054  521996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4980292.pem
	I0914 18:50:48.326078  521996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:49 /usr/share/ca-certificates/4980292.pem
	I0914 18:50:48.326132  521996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4980292.pem
	I0914 18:50:48.334609  521996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4980292.pem /etc/ssl/certs/3ec20f2e.0"
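
	Each CA lands in /usr/share/ca-certificates and is then exposed to OpenSSL-based clients through a <subject-hash>.0 symlink in /etc/ssl/certs (b5213941.0 above is the subject hash of minikubeCA.pem). A sketch of that install step, assuming openssl is on PATH and root privileges:

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA reproduces the pattern above: ask openssl for the subject
    // hash, then symlink /etc/ssl/certs/<hash>.0 to the PEM so OpenSSL can
    // discover it as a trust anchor.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace a stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }
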
	I0914 18:50:48.345669  521996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 18:50:48.350088  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:50:48.358802  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:50:48.367530  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:50:48.375990  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:50:48.384504  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:50:48.393181  521996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
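
	The -checkend 86400 runs above ask one question per control-plane cert: does it expire within the next 24 hours (86400 seconds)? A non-zero exit would force regeneration. The same check expressed with Go's crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin answers what "openssl x509 -checkend 86400" answers:
    // does the first certificate in path expire within the next d?
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
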
	I0914 18:50:48.402938  521996 kubeadm.go:404] StartCluster: {Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:50:48.403026  521996 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 18:50:48.403088  521996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:50:48.454355  521996 cri.go:89] found id: "0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098"
	I0914 18:50:48.454368  521996 cri.go:89] found id: "9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f"
	I0914 18:50:48.454372  521996 cri.go:89] found id: "e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343"
	I0914 18:50:48.454375  521996 cri.go:89] found id: "b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81"
	I0914 18:50:48.454378  521996 cri.go:89] found id: "a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81"
	I0914 18:50:48.454382  521996 cri.go:89] found id: "8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da"
	I0914 18:50:48.454386  521996 cri.go:89] found id: "1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3"
	I0914 18:50:48.454389  521996 cri.go:89] found id: "2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64"
	I0914 18:50:48.454392  521996 cri.go:89] found id: "9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624"
	I0914 18:50:48.454398  521996 cri.go:89] found id: ""
	I0914 18:50:48.454453  521996 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0914 18:50:48.490161  521996 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","pid":2124,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0/rootfs","created":"2023-09-14T18:50:25.406413135Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-8gmx4_54060bf5-109d-46ae-9109-334e69e27e07","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-8gmx
4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"54060bf5-109d-46ae-9109-334e69e27e07"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098","pid":2949,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098/rootfs","created":"2023-09-14T18:50:42.555545836Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4525561a-da2
1-495e-b7d3-5515c83d50df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","pid":1674,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81/rootfs","created":"2023-09-14T18:50:11.761642689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4525561a-da21-495e-b7d3-5515c83d50df","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namesp
ace":"kube-system","io.kubernetes.cri.sandbox-uid":"4525561a-da21-495e-b7d3-5515c83d50df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3","pid":1320,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3/rootfs","created":"2023-09-14T18:49:50.232472523Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri.sandbox-id":"28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"36bdd136296b0d2b4232a27e95688fee"},"owner":"
root"},{"ociVersion":"1.0.2-dev","id":"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64","pid":1309,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64/rootfs","created":"2023-09-14T18:49:50.238080182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri.sandbox-id":"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b06718b4c7fa973ebc40bb50dcf6660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c3
91e","pid":1201,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e/rootfs","created":"2023-09-14T18:49:50.034952233Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-759345_36bdd136296b0d2b4232a27e95688fee","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"36bdd136296b0d2b4232a27e95688fee"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da","pid":1328,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da/rootfs","created":"2023-09-14T18:49:50.252838235Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri.sandbox-id":"e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2a868197f3e71fd109fc9a68b9758d0c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9063c389737122a8b353c82dbd780138b381290945ac4af856
e514687abb3624","pid":1257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624/rootfs","created":"2023-09-14T18:49:50.134582341Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","io.kubernetes.cri.sandbox-name":"etcd-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ced9b208564f27cb5f2c00ad557393d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","pid":1161,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/96a1a963d346417b99eeb
60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d/rootfs","created":"2023-09-14T18:49:49.970070596Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-759345_ced9b208564f27cb5f2c00ad557393d5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ced9b208564f27cb5f2c00ad557393d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f","pid":2151,"status":"running","bundle":"/run/conta
inerd/io.containerd.runtime.v2.task/k8s.io/9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f/rootfs","created":"2023-09-14T18:50:25.502775749Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-8gmx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"54060bf5-109d-46ae-9109-334e69e27e07"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","pid":1807,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","
rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851/rootfs","created":"2023-09-14T18:50:11.983815624Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-th28x_33dc7ee0-d321-46ce-aa60-311175ef90f3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-th28x","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"33dc7ee0-d321-46ce-aa60-311175ef90f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81","pid":1853,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b941362db
a88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81/rootfs","created":"2023-09-14T18:50:12.131418599Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri.sandbox-id":"aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851","io.kubernetes.cri.sandbox-name":"kube-proxy-th28x","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"33dc7ee0-d321-46ce-aa60-311175ef90f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","pid":1175,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb/rootfs","created":"2023-09-14T18:49:49.99123745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-759345_1b06718b4c7fa973ebc40bb50dcf6660","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b06718b4c7fa973ebc40bb50dcf6660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470
b00d535167badf382a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a/rootfs","created":"2023-09-14T18:49:49.993455769Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-759345_2a868197f3e71fd109fc9a68b9758d0c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-759345","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2a868197f3e71fd109fc9a68b9758d0c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343","pid":2023,"status":"running","bundle
":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343/rootfs","created":"2023-09-14T18:50:13.326378076Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri.sandbox-id":"f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","io.kubernetes.cri.sandbox-name":"kindnet-lrpkn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c3084e0a-78b3-4888-bb8f-f70cc32083a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","pid":1808,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f4
1417f4026","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026/rootfs","created":"2023-09-14T18:50:12.026330101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-lrpkn_c3084e0a-78b3-4888-bb8f-f70cc32083a7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-lrpkn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c3084e0a-78b3-4888-bb8f-f70cc32083a7"},"owner":"root"}]
	I0914 18:50:48.490448  521996 cri.go:126] list returned 16 containers
	I0914 18:50:48.490454  521996 cri.go:129] container: {ID:0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0 Status:running}
	I0914 18:50:48.490468  521996 cri.go:131] skipping 0a40c1201c6b145976de1410a101126969cb956edde27faed568b0b7c6a286a0 - not in ps
	I0914 18:50:48.490472  521996 cri.go:129] container: {ID:0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 Status:running}
	I0914 18:50:48.490478  521996 cri.go:135] skipping {0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 running}: state = "running", want "paused"
	I0914 18:50:48.490487  521996 cri.go:129] container: {ID:189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81 Status:running}
	I0914 18:50:48.490493  521996 cri.go:131] skipping 189fc092349814709cf37f5f9225278a9dae015ee9d01e8952b0b22bdb0fde81 - not in ps
	I0914 18:50:48.490497  521996 cri.go:129] container: {ID:1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 Status:running}
	I0914 18:50:48.490503  521996 cri.go:135] skipping {1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 running}: state = "running", want "paused"
	I0914 18:50:48.490508  521996 cri.go:129] container: {ID:2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 Status:running}
	I0914 18:50:48.490514  521996 cri.go:135] skipping {2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 running}: state = "running", want "paused"
	I0914 18:50:48.490520  521996 cri.go:129] container: {ID:28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e Status:running}
	I0914 18:50:48.490526  521996 cri.go:131] skipping 28082c135185c30ccfff67ff3b3201f2f97dd35ff8509d04d963d0c54c6c391e - not in ps
	I0914 18:50:48.490530  521996 cri.go:129] container: {ID:8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da Status:running}
	I0914 18:50:48.490535  521996 cri.go:135] skipping {8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da running}: state = "running", want "paused"
	I0914 18:50:48.490540  521996 cri.go:129] container: {ID:9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624 Status:running}
	I0914 18:50:48.490546  521996 cri.go:135] skipping {9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624 running}: state = "running", want "paused"
	I0914 18:50:48.490551  521996 cri.go:129] container: {ID:96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d Status:running}
	I0914 18:50:48.490559  521996 cri.go:131] skipping 96a1a963d346417b99eeb60461385f1f1a27cdf1bd87fb6849194db2a9fa623d - not in ps
	I0914 18:50:48.490563  521996 cri.go:129] container: {ID:9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f Status:running}
	I0914 18:50:48.490569  521996 cri.go:135] skipping {9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f running}: state = "running", want "paused"
	I0914 18:50:48.490574  521996 cri.go:129] container: {ID:aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851 Status:running}
	I0914 18:50:48.490579  521996 cri.go:131] skipping aff5f56a67e9e4288ea9f0ab07a9d1fea5bb62fa4988026fbadf335d3a592851 - not in ps
	I0914 18:50:48.490583  521996 cri.go:129] container: {ID:b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 Status:running}
	I0914 18:50:48.490589  521996 cri.go:135] skipping {b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 running}: state = "running", want "paused"
	I0914 18:50:48.490593  521996 cri.go:129] container: {ID:d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb Status:running}
	I0914 18:50:48.490599  521996 cri.go:131] skipping d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb - not in ps
	I0914 18:50:48.490603  521996 cri.go:129] container: {ID:e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a Status:running}
	I0914 18:50:48.490609  521996 cri.go:131] skipping e80fd32a4bd0a4aa879a56f94fcb6749167df8647bd470b00d535167badf382a - not in ps
	I0914 18:50:48.490613  521996 cri.go:129] container: {ID:e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 Status:running}
	I0914 18:50:48.490619  521996 cri.go:135] skipping {e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 running}: state = "running", want "paused"
	I0914 18:50:48.490623  521996 cri.go:129] container: {ID:f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026 Status:running}
	I0914 18:50:48.490629  521996 cri.go:131] skipping f04e4e2a517d603553eddfe7d434657b36399a0290426f2f41f80f41417f4026 - not in ps
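
	What happened above: runc listed 16 tasks (sandboxes included), the earlier crictl ps supplied 9 container IDs, and the caller wanted containers already in state "paused". Every entry is therefore skipped, either as "not in ps" (a sandbox pause container) or as running-not-paused. A compact restatement of that filter, with container types and IDs as illustrative stand-ins:

    package main

    import "fmt"

    type ctr struct {
    	ID     string
    	Status string
    }

    // selectByState keeps only the runc tasks that crictl reported (inPs)
    // and whose state matches want. With want "paused" and everything
    // running, as above, nothing survives.
    func selectByState(all []ctr, inPs map[string]bool, want string) []string {
    	var keep []string
    	for _, c := range all {
    		if !inPs[c.ID] {
    			continue // "not in ps": a sandbox, not a container
    		}
    		if c.Status != want {
    			continue // state = "running", want "paused"
    		}
    		keep = append(keep, c.ID)
    	}
    	return keep
    }

    func main() {
    	all := []ctr{{"sandbox-1", "running"}, {"container-1", "running"}}
    	inPs := map[string]bool{"container-1": true}
    	fmt.Println(selectByState(all, inPs, "paused")) // prints []
    }
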
	I0914 18:50:48.490683  521996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:50:48.501723  521996 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 18:50:48.501734  521996 kubeadm.go:636] restartCluster start
	I0914 18:50:48.501788  521996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:50:48.512345  521996 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:50:48.512925  521996 kubeconfig.go:92] found "functional-759345" server: "https://192.168.49.2:8441"
	I0914 18:50:48.514671  521996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:50:48.525799  521996 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-09-14 18:49:41.991318920 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-09-14 18:50:47.835456425 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
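
	The "needs reconfigure" decision rests entirely on diff's exit status: 0 means the deployed kubeadm.yaml matches the new one, 1 means they differ, as they do here because the test swapped enable-admission-plugins. A sketch of that check (not minikube's code):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configsDiffer wraps "diff -u old new": exit 0 means identical,
    // exit 1 means the configs differ, anything else is a real error.
    func configsDiffer(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(differ, diff)
    }
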
	I0914 18:50:48.525808  521996 kubeadm.go:1128] stopping kube-system containers ...
	I0914 18:50:48.525818  521996 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0914 18:50:48.525874  521996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:50:48.568435  521996 cri.go:89] found id: "0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098"
	I0914 18:50:48.568447  521996 cri.go:89] found id: "9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f"
	I0914 18:50:48.568461  521996 cri.go:89] found id: "e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343"
	I0914 18:50:48.568465  521996 cri.go:89] found id: "b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81"
	I0914 18:50:48.568468  521996 cri.go:89] found id: "a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81"
	I0914 18:50:48.568473  521996 cri.go:89] found id: "8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da"
	I0914 18:50:48.568476  521996 cri.go:89] found id: "1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3"
	I0914 18:50:48.568489  521996 cri.go:89] found id: "2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64"
	I0914 18:50:48.568494  521996 cri.go:89] found id: "9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624"
	I0914 18:50:48.568500  521996 cri.go:89] found id: ""
	I0914 18:50:48.568504  521996 cri.go:234] Stopping containers: [0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624]
	I0914 18:50:48.568572  521996 ssh_runner.go:195] Run: which crictl
	I0914 18:50:48.573210  521996 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624
	I0914 18:50:53.846962  521996 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624: (5.273720689s)
	W0914 18:50:53.847015  521996 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098 9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343 b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81 a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81 8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da 1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3 2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624: Process exited with status 1
	stdout:
	0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098
	9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f
	e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343
	b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81
	
	stderr:
	E0914 18:50:53.843934    3441 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81\": not found" containerID="a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81"
	time="2023-09-14T18:50:53Z" level=fatal msg="stopping the container \"a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7d0af0420743bb5d93dbf7895676c8b93de0ebffaadb7eaf290028fa43f3c81\": not found"
	I0914 18:50:53.847076  521996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:50:53.917191  521996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:50:53.928254  521996 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 14 18:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 14 18:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 14 18:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 14 18:49 /etc/kubernetes/scheduler.conf
	
	I0914 18:50:53.928314  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0914 18:50:53.939924  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0914 18:50:53.951439  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0914 18:50:53.962872  521996 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:50:53.962943  521996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:50:53.973931  521996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0914 18:50:53.984761  521996 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:50:53.984818  521996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:50:53.995349  521996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:50:54.008876  521996 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 18:50:54.008892  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:54.078076  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.492408  521996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.414303118s)
	I0914 18:50:56.492430  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.736359  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.829024  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:56.921090  521996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:50:56.921153  521996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:50:56.936745  521996 api_server.go:72] duration metric: took 15.654613ms to wait for apiserver process to appear ...
	I0914 18:50:56.936758  521996 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:50:56.936773  521996 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0914 18:50:56.947201  521996 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0914 18:50:56.965309  521996 api_server.go:141] control plane version: v1.28.1
	I0914 18:50:56.965326  521996 api_server.go:131] duration metric: took 28.562381ms to wait for apiserver health ...
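
	The healthz wait is a plain HTTPS poll: hit /healthz until the body is the literal "ok" or time runs out. A self-contained sketch of that loop; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy at %s after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.49.2:8441/healthz", time.Minute))
    }
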
	I0914 18:50:56.965334  521996 cni.go:84] Creating CNI manager for ""
	I0914 18:50:56.965340  521996 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:50:56.968245  521996 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 18:50:56.970909  521996 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 18:50:56.976395  521996 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 18:50:56.976406  521996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 18:50:57.012302  521996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 18:50:57.504552  521996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:50:57.519513  521996 system_pods.go:59] 8 kube-system pods found
	I0914 18:50:57.519529  521996 system_pods.go:61] "coredns-5dd5756b68-8gmx4" [54060bf5-109d-46ae-9109-334e69e27e07] Running
	I0914 18:50:57.519534  521996 system_pods.go:61] "etcd-functional-759345" [1f30a799-d227-43e0-8599-81d36bac12c3] Running
	I0914 18:50:57.519538  521996 system_pods.go:61] "kindnet-lrpkn" [c3084e0a-78b3-4888-bb8f-f70cc32083a7] Running
	I0914 18:50:57.519543  521996 system_pods.go:61] "kube-apiserver-functional-759345" [650e7ed1-8a71-4907-95b4-b939c85b8b4d] Running
	I0914 18:50:57.519547  521996 system_pods.go:61] "kube-controller-manager-functional-759345" [bdfdc9e5-5fee-4d58-81fe-03b7cccef329] Running
	I0914 18:50:57.519551  521996 system_pods.go:61] "kube-proxy-th28x" [33dc7ee0-d321-46ce-aa60-311175ef90f3] Running
	I0914 18:50:57.519555  521996 system_pods.go:61] "kube-scheduler-functional-759345" [6b3625d9-1977-4c7c-b7d9-db355bb3836b] Running
	I0914 18:50:57.519563  521996 system_pods.go:61] "storage-provisioner" [4525561a-da21-495e-b7d3-5515c83d50df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:50:57.519570  521996 system_pods.go:74] duration metric: took 15.008777ms to wait for pod list to return data ...
	I0914 18:50:57.519577  521996 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:50:57.522915  521996 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 18:50:57.522933  521996 node_conditions.go:123] node cpu capacity is 2
	I0914 18:50:57.522943  521996 node_conditions.go:105] duration metric: took 3.361826ms to run NodePressure ...
	I0914 18:50:57.522959  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:50:57.743113  521996 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 18:50:57.748699  521996 retry.go:31] will retry after 212.268379ms: kubelet not initialised
	I0914 18:50:57.994491  521996 retry.go:31] will retry after 212.337309ms: kubelet not initialised
	I0914 18:50:58.223408  521996 kubeadm.go:787] kubelet initialised
	I0914 18:50:58.223417  521996 kubeadm.go:788] duration metric: took 480.292581ms waiting for restarted kubelet to initialise ...
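
	The two "will retry after ..." lines come from a poll-sleep-poll loop with short randomized delays; the exact backoff policy inside retry.go is not shown here, so the following is only the general shape of such a loop, with made-up backoff constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil re-runs f with a short randomized sleep until it succeeds
    // or the deadline passes - the pattern behind the
    // "will retry after 212.268379ms: kubelet not initialised" lines.
    func retryUntil(deadline time.Time, f func() error) error {
    	for {
    		if err := f(); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return err
    		}
    		time.Sleep(150*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond)
    	}
    }

    func main() {
    	tries := 0
    	err := retryUntil(time.Now().Add(2*time.Second), func() error {
    		tries++
    		if tries < 3 {
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	})
    	fmt.Println(tries, err)
    }
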
	I0914 18:50:58.223426  521996 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:50:58.235226  521996 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261039  521996 pod_ready.go:97] error getting pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261055  521996 pod_ready.go:81] duration metric: took 1.025804117s waiting for pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.261065  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-8gmx4" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261148  521996 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261473  521996 pod_ready.go:97] error getting pod "etcd-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261484  521996 pod_ready.go:81] duration metric: took 328.747µs waiting for pod "etcd-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.261493  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261514  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261749  521996 pod_ready.go:97] error getting pod "kube-apiserver-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261756  521996 pod_ready.go:81] duration metric: took 234.659µs waiting for pod "kube-apiserver-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.261763  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261779  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.261990  521996 pod_ready.go:97] error getting pod "kube-controller-manager-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.261999  521996 pod_ready.go:81] duration metric: took 214.089µs waiting for pod "kube-controller-manager-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.262006  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262025  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-th28x" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.262242  521996 pod_ready.go:97] error getting pod "kube-proxy-th28x" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262251  521996 pod_ready.go:81] duration metric: took 219.086µs waiting for pod "kube-proxy-th28x" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.262257  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-th28x" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262275  521996 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-759345" in "kube-system" namespace to be "Ready" ...
	I0914 18:50:59.262428  521996 pod_ready.go:97] error getting pod "kube-scheduler-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262434  521996 pod_ready.go:81] duration metric: took 153.657µs waiting for pod "kube-scheduler-functional-759345" in "kube-system" namespace to be "Ready" ...
	E0914 18:50:59.262440  521996 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-759345" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.262457  521996 pod_ready.go:38] duration metric: took 1.039024058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:50:59.262474  521996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0914 18:50:59.272278  521996 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
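
	The oom_adj warning is a cascade effect, not a separate bug: the apiserver container had already exited (see the Exited kube-apiserver entry in the container status below), so pgrep matched nothing and the shell substituted an empty PID into the path, leaving the double slash in /proc//oom_adj. A sketch of the same check that fails more legibly by testing pgrep's result first:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // readOOMAdj reads /proc/<pid>/oom_adj for the newest exact-name match
    // of a process, reporting "not running" instead of a confusing cat
    // error when pgrep finds nothing.
    func readOOMAdj(name string) (string, error) {
    	pid, err := exec.Command("pgrep", "-xn", name).Output()
    	if err != nil {
    		return "", fmt.Errorf("%s is not running", name)
    	}
    	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	return strings.TrimSpace(string(data)), err
    }

    func main() {
    	fmt.Println(readOOMAdj("kube-apiserver"))
    }
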
	I0914 18:50:59.272292  521996 kubeadm.go:640] restartCluster took 10.770552378s
	I0914 18:50:59.272298  521996 kubeadm.go:406] StartCluster complete in 10.869371087s
	I0914 18:50:59.272311  521996 settings.go:142] acquiring lock: {Name:mkfaf0f329c2736368d7fc21433e53e0c9a5b1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:50:59.272373  521996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:50:59.273063  521996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/kubeconfig: {Name:mk6a8e8b5c770de881617bb4e8ebf560fd4b6800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:50:59.273295  521996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 18:50:59.273606  521996 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:50:59.273725  521996 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 18:50:59.273782  521996 addons.go:69] Setting storage-provisioner=true in profile "functional-759345"
	I0914 18:50:59.273806  521996 addons.go:231] Setting addon storage-provisioner=true in "functional-759345"
	W0914 18:50:59.273811  521996 addons.go:240] addon storage-provisioner should already be in state true
	I0914 18:50:59.273874  521996 host.go:66] Checking if "functional-759345" exists ...
	I0914 18:50:59.274383  521996 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	I0914 18:50:59.274448  521996 addons.go:69] Setting default-storageclass=true in profile "functional-759345"
	I0914 18:50:59.274462  521996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-759345"
	I0914 18:50:59.274824  521996 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	W0914 18:50:59.275016  521996 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-759345" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:50:59.275028  521996 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.275064  521996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:50:59.277923  521996 out.go:177] * Verifying Kubernetes components...
	I0914 18:50:59.280238  521996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:50:59.319059  521996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0914 18:50:59.318692  521996 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	I0914 18:50:59.320760  521996 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:50:59.320769  521996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:50:59.320832  521996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:50:59.341007  521996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	E0914 18:50:59.405503  521996 start.go:882] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0914 18:50:59.405524  521996 start.go:294] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0914 18:50:59.405540  521996 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0914 18:50:59.405549  521996 node_ready.go:35] waiting up to 6m0s for node "functional-759345" to be "Ready" ...
	I0914 18:50:59.405881  521996 node_ready.go:53] error getting node "functional-759345": Get "https://192.168.49.2:8441/api/v1/nodes/functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	I0914 18:50:59.405891  521996 node_ready.go:38] duration metric: took 329.24µs waiting for node "functional-759345" to be "Ready" ...
	I0914 18:50:59.407559  521996 out.go:177] 
	W0914 18:50:59.409592  521996 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-759345": Get "https://192.168.49.2:8441/api/v1/nodes/functional-759345": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:50:59.409614  521996 out.go:239] * 
	W0914 18:50:59.410665  521996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:50:59.413720  521996 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1ba8309bf660e       04b4eaa3d3db8       4 seconds ago        Running             kindnet-cni               1                   f04e4e2a517d6       kindnet-lrpkn
	30ab48e06d49d       ba04bb24b9575       4 seconds ago        Running             storage-provisioner       2                   189fc09234981       storage-provisioner
	08745f5823081       b29fb62480892       4 seconds ago        Exited              kube-apiserver            1                   7ff4558e377ff       kube-apiserver-functional-759345
	b7a6c40efcf70       97e04611ad434       4 seconds ago        Running             coredns                   1                   0a40c1201c6b1       coredns-5dd5756b68-8gmx4
	69fc78d71f274       812f5241df7fd       4 seconds ago        Running             kube-proxy                1                   aff5f56a67e9e       kube-proxy-th28x
	0fb83152a87c7       ba04bb24b9575       20 seconds ago       Exited              storage-provisioner       1                   189fc09234981       storage-provisioner
	9c061c51a7996       97e04611ad434       37 seconds ago       Exited              coredns                   0                   0a40c1201c6b1       coredns-5dd5756b68-8gmx4
	e94bc3a965652       04b4eaa3d3db8       49 seconds ago       Exited              kindnet-cni               0                   f04e4e2a517d6       kindnet-lrpkn
	b941362dba88e       812f5241df7fd       51 seconds ago       Exited              kube-proxy                0                   aff5f56a67e9e       kube-proxy-th28x
	8770ffd08048f       8b6e1980b7584       About a minute ago   Running             kube-controller-manager   0                   e80fd32a4bd0a       kube-controller-manager-functional-759345
	1d92ce7b37e23       b4a5a57e99492       About a minute ago   Running             kube-scheduler            0                   28082c135185c       kube-scheduler-functional-759345
	9063c38973712       9cdd6470f48c8       About a minute ago   Running             etcd                      0                   96a1a963d3464       etcd-functional-759345
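
Note: in the table above, kube-apiserver is the only control-plane container in state Exited with a restart recorded (ATTEMPT 1), while etcd, kube-scheduler and kube-controller-manager from the original boot are still Running, which narrows the outage to the apiserver alone. Its termination reason can be read straight from the runtime; a sketch, assuming the truncated container ID from the table is an unambiguous prefix (crictl resolves ID prefixes):

	out/minikube-linux-arm64 -p functional-759345 ssh -- sudo crictl logs 08745f5823081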
	
	* 
	* ==> containerd <==
	* Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.570737965Z" level=info msg="StartContainer for \"1ba8309bf660e9c8a958e80210f114290ccd4063c5ae2ef82b9b9d50d130f591\" returns successfully"
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.667718918Z" level=info msg="StartContainer for \"30ab48e06d49d30e0f4194ba432a102e542eac20b5c70a8d57e148c0fd04ec2f\" returns successfully"
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.699694522Z" level=info msg="shim disconnected" id=08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.699843684Z" level=warning msg="cleaning up after shim disconnected" id=08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc namespace=k8s.io
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.699856443Z" level=info msg="cleaning up dead shim"
	Sep 14 18:50:58 functional-759345 containerd[3247]: time="2023-09-14T18:50:58.716640968Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:50:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3994 runtime=io.containerd.runc.v2\n"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.059888045Z" level=info msg="StopContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" with timeout 2 (s)"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.060224209Z" level=info msg="Stop container \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" with signal terminated"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.146573376Z" level=info msg="shim disconnected" id=d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.146638000Z" level=warning msg="cleaning up after shim disconnected" id=d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb namespace=k8s.io
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.146648995Z" level=info msg="cleaning up dead shim"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.152213811Z" level=info msg="shim disconnected" id=2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.152275563Z" level=warning msg="cleaning up after shim disconnected" id=2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64 namespace=k8s.io
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.152287198Z" level=info msg="cleaning up dead shim"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.165925971Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4167 runtime=io.containerd.runc.v2\n"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.169183781Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.169640348Z" level=info msg="StopContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" returns successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.170734061Z" level=info msg="StopPodSandbox for \"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.170815021Z" level=info msg="Container to stop \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.173005539Z" level=info msg="TearDown network for sandbox \"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb\" successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.173044185Z" level=info msg="StopPodSandbox for \"d83594343c1eacf4e1cd69ea641ab95161fde942fed420be9006a8af355fdfcb\" returns successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.182845086Z" level=info msg="RemoveContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.189787144Z" level=info msg="RemoveContainer for \"2239c3e0d01dedf5fc15c745ed9a2ce67252db0a74fc51a9653e6a3ebf238a64\" returns successfully"
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.195328641Z" level=info msg="RemoveContainer for \"518dce08aac7ba0303423c550d12528f9903c08bc2fddd7c465096541c454a51\""
	Sep 14 18:50:59 functional-759345 containerd[3247]: time="2023-09-14T18:50:59.201197417Z" level=info msg="RemoveContainer for \"518dce08aac7ba0303423c550d12528f9903c08bc2fddd7c465096541c454a51\" returns successfully"
	
	* 
	* ==> coredns [9c061c51a79966d17fcca985633ee5fde7f8b3a2ee2b69521099509d4ac5a27f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46435 - 35346 "HINFO IN 2418697808068076682.3086629869597095228. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03713899s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b7a6c40efcf707c80108e5ce7ae4fff5b4f1c4fe03ec48227b97f39c5f2c9ab4] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45574 - 8669 "HINFO IN 2546873242402138308.1549916675134106726. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01409644s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=492": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
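
Note: 10.96.0.1:443 is the ClusterIP of the default/kubernetes Service, which kube-proxy maps to the apiserver endpoint on 192.168.49.2:8441, so these refusals are the same apiserver outage observed from inside a pod, not a CoreDNS fault. Once the apiserver is serving again, the mapping can be confirmed with, for example:

	kubectl --context functional-759345 -n default get endpoints kubernetes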
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000738] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001019] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000e6d70ae1
	[  +0.001093] FS-Cache: N-key=[8] '943a5c0100000000'
	[  +0.020369] FS-Cache: Duplicate cookie detected
	[  +0.000880] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000007cc3b60a
	[  +0.001158] FS-Cache: O-key=[8] '943a5c0100000000'
	[  +0.000773] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001149] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=000000009e94c0ae
	[  +0.001230] FS-Cache: N-key=[8] '943a5c0100000000'
	[  +2.856088] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000000e997ae2
	[  +0.001081] FS-Cache: O-key=[8] '933a5c0100000000'
	[  +0.000711] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000e6d70ae1
	[  +0.001069] FS-Cache: N-key=[8] '933a5c0100000000'
	[  +0.399166] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=0000000095ffa149
	[  +0.001108] FS-Cache: O-key=[8] '993a5c0100000000'
	[  +0.000739] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=0000000080474072
	[  +0.001122] FS-Cache: N-key=[8] '993a5c0100000000'
	[ +10.571489] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [9063c389737122a8b353c82dbd780138b381290945ac4af856e514687abb3624] <==
	* {"level":"info","ts":"2023-09-14T18:49:50.243434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-09-14T18:49:50.243558Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-09-14T18:49:50.244092Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T18:49:50.244228Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-09-14T18:49:50.244247Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-09-14T18:49:50.244971Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T18:49:50.24504Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T18:49:50.416134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T18:49:50.416364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T18:49:50.416534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-09-14T18:49:50.416665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.416766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.416873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.416957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-09-14T18:49:50.418734Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:49:50.419103Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-759345 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T18:49:50.419232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T18:49:50.420555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T18:49:50.433552Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T18:49:50.433746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T18:49:50.420759Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T18:49:50.438605Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-09-14T18:49:50.421136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:49:50.439123Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T18:49:50.439253Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:51:03 up  4:33,  0 users,  load average: 2.17, 1.82, 1.34
	Linux functional-759345 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1ba8309bf660e9c8a958e80210f114290ccd4063c5ae2ef82b9b9d50d130f591] <==
	* I0914 18:50:58.637829       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 18:50:58.638100       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0914 18:50:58.638356       1 main.go:116] setting mtu 1500 for CNI 
	I0914 18:50:58.638464       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 18:50:58.638555       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 18:50:59.027645       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:59.028541       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [e94bc3a965652b79b33b80ffae0165217887edeaa648b08f83fdec685d670343] <==
	* I0914 18:50:13.428252       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 18:50:13.428321       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0914 18:50:13.428542       1 main.go:116] setting mtu 1500 for CNI 
	I0914 18:50:13.428565       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 18:50:13.428623       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 18:50:13.925827       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:13.925858       1 main.go:227] handling current node
	I0914 18:50:23.941160       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:23.941193       1 main.go:227] handling current node
	I0914 18:50:33.952981       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:33.953008       1 main.go:227] handling current node
	I0914 18:50:43.965247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:50:43.965284       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc] <==
	* I0914 18:50:58.653236       1 options.go:220] external host was not specified, using 192.168.49.2
	I0914 18:50:58.654424       1 server.go:148] Version: v1.28.1
	I0914 18:50:58.654453       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0914 18:50:58.654708       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
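
Note: this bind failure is the root cause visible in this dump: the replacement kube-apiserver cannot listen on 0.0.0.0:8441, plausibly because the previous apiserver container still held the port when the restart began (the StopContainer events in the containerd section above fit that ordering), and once the old process was torn down nothing was listening, hence the connection-refused errors throughout the rest of the report. A sketch for identifying the current holder of the port from inside the node:

	out/minikube-linux-arm64 -p functional-759345 ssh -- sudo ss -ltnp | grep 8441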
	
	* 
	* ==> kube-controller-manager [8770ffd08048f9f173137ce099d55b4076b9b98377d7951dce31ff241a3637da] <==
	* I0914 18:50:10.134309       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wrlqg"
	I0914 18:50:10.171546       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8gmx4"
	I0914 18:50:10.188048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="274.219281ms"
	I0914 18:50:10.213409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.27746ms"
	I0914 18:50:10.213585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.867µs"
	I0914 18:50:10.221169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.699µs"
	I0914 18:50:10.277168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.273µs"
	I0914 18:50:10.290617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.252µs"
	I0914 18:50:10.376051       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0914 18:50:10.401175       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wrlqg"
	I0914 18:50:10.423094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.886598ms"
	I0914 18:50:10.459742       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 18:50:10.462963       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 18:50:10.462992       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0914 18:50:10.480397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.16597ms"
	I0914 18:50:10.480681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.257µs"
	I0914 18:50:10.480834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.642µs"
	I0914 18:50:12.428148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.344µs"
	I0914 18:50:12.435711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.067µs"
	I0914 18:50:12.445350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="190.925µs"
	I0914 18:50:26.438919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.173µs"
	I0914 18:50:26.468455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.56249ms"
	I0914 18:50:26.468560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.441µs"
	I0914 18:50:57.942505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.323719ms"
	I0914 18:50:57.942655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.648µs"
	
	* 
	* ==> kube-proxy [69fc78d71f2742c1007548b03bf0bc1359bb50f645cc95a6f3d69a87a928464b] <==
	* I0914 18:50:58.778781       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:50:58.779808       1 config.go:188] "Starting service config controller"
	I0914 18:50:58.779937       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 18:50:58.779979       1 config.go:97] "Starting endpoint slice config controller"
	I0914 18:50:58.779988       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 18:50:58.781015       1 config.go:315] "Starting node config controller"
	I0914 18:50:58.781037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 18:50:58.881070       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 18:50:58.881150       1 shared_informer.go:318] Caches are synced for node config
	I0914 18:50:58.881330       1 shared_informer.go:318] Caches are synced for service config
	W0914 18:50:59.112333       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0914 18:50:59.112380       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0914 18:50:59.112400       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received
	W0914 18:51:00.022817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-759345&resourceVersion=484": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:00.022875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-759345&resourceVersion=484": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:00.192876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:00.193027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:00.528866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:00.528925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:02.133404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:02.133455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=492": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:02.228024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-759345&resourceVersion=484": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:02.228139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-759345&resourceVersion=484": dial tcp 192.168.49.2:8441: connect: connection refused
	W0914 18:51:02.675661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	E0914 18:51:02.675707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=470": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-proxy [b941362dba88e92d4607433a39dd47594a4083a815bd32d4a6d115603ca1de81] <==
	* I0914 18:50:12.194628       1 server_others.go:69] "Using iptables proxy"
	I0914 18:50:12.209124       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0914 18:50:12.243944       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 18:50:12.246551       1 server_others.go:152] "Using iptables Proxier"
	I0914 18:50:12.246739       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 18:50:12.246832       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 18:50:12.246952       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 18:50:12.247314       1 server.go:846] "Version info" version="v1.28.1"
	I0914 18:50:12.247646       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:50:12.248786       1 config.go:188] "Starting service config controller"
	I0914 18:50:12.249123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 18:50:12.249292       1 config.go:97] "Starting endpoint slice config controller"
	I0914 18:50:12.249403       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 18:50:12.251534       1 config.go:315] "Starting node config controller"
	I0914 18:50:12.251678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 18:50:12.349620       1 shared_informer.go:318] Caches are synced for service config
	I0914 18:50:12.349696       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 18:50:12.351966       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1d92ce7b37e238ff86b6c12e540c8590c9d516ca2f0494bf295a46c01f7985a3] <==
	* W0914 18:49:54.922773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:49:54.923291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 18:49:54.922820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:49:54.923569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 18:49:54.923930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:49:54.924112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 18:49:54.924380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:49:54.924535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 18:49:54.924848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:49:54.925049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 18:49:54.928631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:49:54.928938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 18:49:54.929250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:49:54.929453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 18:49:54.929686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 18:49:54.929847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 18:49:54.930081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:49:54.930282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 18:49:54.930559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 18:49:54.930719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 18:49:54.930949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:49:54.931121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 18:49:54.931604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:49:54.932416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0914 18:49:56.113910       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.213193    3629 status_manager.go:853] "Failed to get status for pod" podUID="36bdd136296b0d2b4232a27e95688fee" pod="kube-system/kube-scheduler-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.213574    3629 status_manager.go:853] "Failed to get status for pod" podUID="4525561a-da21-495e-b7d3-5515c83d50df" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.213910    3629 status_manager.go:853] "Failed to get status for pod" podUID="c3084e0a-78b3-4888-bb8f-f70cc32083a7" pod="kube-system/kindnet-lrpkn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-lrpkn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.214251    3629 status_manager.go:853] "Failed to get status for pod" podUID="33dc7ee0-d321-46ce-aa60-311175ef90f3" pod="kube-system/kube-proxy-th28x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.214564    3629 status_manager.go:853] "Failed to get status for pod" podUID="54060bf5-109d-46ae-9109-334e69e27e07" pod="kube-system/coredns-5dd5756b68-8gmx4" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: E0914 18:51:00.643772    3629 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-759345.1784d88ccde2f7b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-759345", UID:"1b06718b4c7fa973ebc40bb50dcf6660", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Stopping container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"functional-759345"}, FirstTimestamp:time.Date(2023, time.September, 14, 18, 50, 59, 59374009, time.Local), LastTimestamp:time.Date(2023, time.September, 14, 18, 50, 59, 59374009, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-759345"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.857156    3629 status_manager.go:853] "Failed to get status for pod" podUID="36bdd136296b0d2b4232a27e95688fee" pod="kube-system/kube-scheduler-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.857648    3629 status_manager.go:853] "Failed to get status for pod" podUID="4525561a-da21-495e-b7d3-5515c83d50df" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.857958    3629 status_manager.go:853] "Failed to get status for pod" podUID="c3084e0a-78b3-4888-bb8f-f70cc32083a7" pod="kube-system/kindnet-lrpkn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-lrpkn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.858303    3629 status_manager.go:853] "Failed to get status for pod" podUID="33dc7ee0-d321-46ce-aa60-311175ef90f3" pod="kube-system/kube-proxy-th28x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.858659    3629 status_manager.go:853] "Failed to get status for pod" podUID="54060bf5-109d-46ae-9109-334e69e27e07" pod="kube-system/coredns-5dd5756b68-8gmx4" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.859008    3629 status_manager.go:853] "Failed to get status for pod" podUID="ced9b208564f27cb5f2c00ad557393d5" pod="kube-system/etcd-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:00 functional-759345 kubelet[3629]: I0914 18:51:00.859332    3629 status_manager.go:853] "Failed to get status for pod" podUID="18070792e98a31783321ccb1c8fa0250" pod="kube-system/kube-apiserver-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.058753    3629 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1b06718b4c7fa973ebc40bb50dcf6660" path="/var/lib/kubelet/pods/1b06718b4c7fa973ebc40bb50dcf6660/volumes"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.213113    3629 scope.go:117] "RemoveContainer" containerID="08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: E0914 18:51:01.213697    3629 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-759345_kube-system(18070792e98a31783321ccb1c8fa0250)\"" pod="kube-system/kube-apiserver-functional-759345" podUID="18070792e98a31783321ccb1c8fa0250"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.213841    3629 status_manager.go:853] "Failed to get status for pod" podUID="54060bf5-109d-46ae-9109-334e69e27e07" pod="kube-system/coredns-5dd5756b68-8gmx4" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8gmx4\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.215199    3629 status_manager.go:853] "Failed to get status for pod" podUID="ced9b208564f27cb5f2c00ad557393d5" pod="kube-system/etcd-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.215633    3629 status_manager.go:853] "Failed to get status for pod" podUID="18070792e98a31783321ccb1c8fa0250" pod="kube-system/kube-apiserver-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.216009    3629 status_manager.go:853] "Failed to get status for pod" podUID="36bdd136296b0d2b4232a27e95688fee" pod="kube-system/kube-scheduler-functional-759345" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-759345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.216363    3629 status_manager.go:853] "Failed to get status for pod" podUID="4525561a-da21-495e-b7d3-5515c83d50df" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.216746    3629 status_manager.go:853] "Failed to get status for pod" podUID="c3084e0a-78b3-4888-bb8f-f70cc32083a7" pod="kube-system/kindnet-lrpkn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kindnet-lrpkn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:01 functional-759345 kubelet[3629]: I0914 18:51:01.217086    3629 status_manager.go:853] "Failed to get status for pod" podUID="33dc7ee0-d321-46ce-aa60-311175ef90f3" pod="kube-system/kube-proxy-th28x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-th28x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 14 18:51:02 functional-759345 kubelet[3629]: I0914 18:51:02.215015    3629 scope.go:117] "RemoveContainer" containerID="08745f582308184f4f1fb529b10ce709037202a2315f3020984e93e255f2f0fc"
	Sep 14 18:51:02 functional-759345 kubelet[3629]: E0914 18:51:02.216363    3629 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-759345_kube-system(18070792e98a31783321ccb1c8fa0250)\"" pod="kube-system/kube-apiserver-functional-759345" podUID="18070792e98a31783321ccb1c8fa0250"
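
Note: kubelet has placed kube-apiserver into CrashLoopBackOff with a 10s back-off, so the control plane stays down at least until the next restart attempt; the status_manager errors above are kubelet failing to report pod status through the very apiserver it is trying to restart. The retries can be watched from the node without a working apiserver; a sketch:

	out/minikube-linux-arm64 -p functional-759345 ssh -- sudo crictl ps -a --name kube-apiserver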
	
	* 
	* ==> storage-provisioner [0fb83152a87c7aca322cc2f188d85d65001b38f44ceab3251eb4e33dd626d098] <==
	* I0914 18:50:42.598792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:50:42.618921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:50:42.619064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:50:42.629049       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:50:42.631036       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-759345_5c285082-4544-4263-bda2-0f54cd004cbc!
	I0914 18:50:42.631148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bccc3ad9-bd5f-4e25-8328-902a0a5d0e29", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-759345_5c285082-4544-4263-bda2-0f54cd004cbc became leader
	I0914 18:50:42.731609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-759345_5c285082-4544-4263-bda2-0f54cd004cbc!
	
	* 
	* ==> storage-provisioner [30ab48e06d49d30e0f4194ba432a102e542eac20b5c70a8d57e148c0fd04ec2f] <==
	* I0914 18:50:58.704878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:50:58.737468       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:50:58.737558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0914 18:51:02.196358       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:51:03.309943  523729 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-759345 -n functional-759345
E0914 18:51:04.126914  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:04.133105  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:04.143341  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:04.163645  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:04.203960  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:04.284293  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-759345 -n functional-759345: exit status 2 (369.906901ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-759345" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (2.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 logs --file /tmp/TestFunctionalserialLogsFileCmd1528097383/001/logs.txt
E0914 18:51:06.685452  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 logs --file /tmp/TestFunctionalserialLogsFileCmd1528097383/001/logs.txt: (1.660728794s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:51:06.925165  524210 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
***
--- FAIL: TestFunctional/serial/LogsFileCmd (1.66s)
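
The assertion behind this failure is roughly "run `logs --file` and require the command itself to print nothing, since everything should go to the file". A hedged sketch of that check, using the binary path and profile name from the log (the temp-file path here is a simplified stand-in):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Capture both streams; the test fails if either is non-empty.
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-759345",
		"logs", "--file", "/tmp/logs.txt")
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	_ = cmd.Run()
	if stdout.Len() > 0 || stderr.Len() > 0 {
		fmt.Printf("expected empty minikube logs output, got stdout=%q stderr=%q\n",
			stdout.String(), stderr.String())
	}
}
```

Here the non-empty stderr came from the same root cause as ComponentHealth: `kubectl describe nodes` could not reach the stopped apiserver.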

                                                
                                    
TestFunctional/serial/InvalidService (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-759345 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-759345 apply -f testdata/invalidsvc.yaml: exit status 1 (71.112255ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-759345 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 119. stderr: I0914 18:51:08.961721  524716 out.go:296] Setting OutFile to fd 1 ...
I0914 18:51:08.961911  524716 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:08.961937  524716 out.go:309] Setting ErrFile to fd 2...
I0914 18:51:08.961956  524716 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:08.962248  524716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
I0914 18:51:08.962603  524716 mustload.go:65] Loading cluster: functional-759345
I0914 18:51:08.963020  524716 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:51:08.963628  524716 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
I0914 18:51:08.987742  524716 host.go:66] Checking if "functional-759345" exists ...
I0914 18:51:08.988046  524716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0914 18:51:09.139217  524716 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-14 18:51:09.126536137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
I0914 18:51:09.139392  524716 api_server.go:166] Checking apiserver status ...
I0914 18:51:09.139465  524716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 18:51:09.139506  524716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
I0914 18:51:09.228406  524716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
W0914 18:51:09.354727  524716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0914 18:51:09.362218  524716 out.go:177] * This control plane is not running! (state=Stopped)
W0914 18:51:09.365128  524716 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p functional-759345"
! This is unusual - you may want to investigate using "minikube logs -p functional-759345"
I0914 18:51:09.367595  524716 out.go:177]   To start a cluster, run: "minikube start -p functional-759345"

                                                
                                                
stdout: * This control plane is not running! (state=Stopped)
To start a cluster, run: "minikube start -p functional-759345"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 524715: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
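
The trace above shows how the tunnel command decides the control plane is down: it runs `sudo pgrep -xnf kube-apiserver.*minikube.*` inside the node and treats a non-zero exit as "stopped", then aborts with exit code 119. A minimal sketch of the same probe, assuming local shell access rather than the SSH runner minikube actually uses:

```go
package main

import (
	"fmt"
	"os/exec"
)

// apiserverRunning mirrors the pgrep check from the log. pgrep exits 1
// when no process matches, which Run() surfaces as a non-nil error.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	if !apiserverRunning() {
		fmt.Println("This control plane is not running! (state=Stopped)")
	}
}
```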

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-759345 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-759345 apply -f testdata/testsvc.yaml: exit status 1 (94.474636ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-759345 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-759345 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-759345 get svc nginx-svc: exit status 1 (83.696937ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): services "nginx-svc" not found

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-759345 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.52s)
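
The "no Host in request URL" error above means the test concatenated `http://` with an empty tunnel IP, because the `nginx-svc` service was never created (the Setup step already failed). A small sketch, assuming nothing beyond the standard library, of guarding that GET so the failure mode is explicit:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	raw := "http://" // what the test ended up with: scheme plus empty host
	u, err := url.Parse(raw)
	if err != nil || u.Host == "" {
		fmt.Printf("refusing to GET %q: no host (tunnel never assigned an IP)\n", raw)
		return
	}
	fmt.Println("would GET", u.String())
}
```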

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image load --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr
2023/09/14 18:53:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 image load --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr: (4.33950874s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-759345" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.62s)
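
This failure and the two ImageCommands failures that follow (ImageReloadDaemon and ImageTagAndLoadDaemon) share the same verification step: after `image load`, run `image ls` and require the tag to appear. A hedged sketch of that check, with the binary path, profile, and tag copied from the log:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tag := "gcr.io/google-containers/addon-resizer:functional-759345"
	var out bytes.Buffer
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-759345", "image", "ls")
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	// The load command exited 0, yet this substring check is what failed.
	if !strings.Contains(out.String(), tag) {
		fmt.Printf("expected %q to be loaded into minikube but the image is not there\n", tag)
	}
}
```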

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image load --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 image load --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr: (3.467251942s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-759345" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.377326535s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-759345
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image load --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 image load --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr: (3.090333662s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-759345" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image save gcr.io/google-containers/addon-resizer:functional-759345 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)
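
The assertion here is simply that the tar file exists on disk after `image save`; a sketch with the path copied from the log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("expected %s to exist after `image save`, but it doesn't\n", path)
	}
}
```

Note that the ImageLoadFromFile failure directly below is a cascade of this one: its stderr shows `stat .../addon-resizer-save.tar: no such file or directory`, i.e. the tar that was never written here.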

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0914 18:53:25.477484  529329 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:53:25.478160  529329 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:25.478204  529329 out.go:309] Setting ErrFile to fd 2...
	I0914 18:53:25.478227  529329 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:25.478505  529329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:53:25.479214  529329 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:53:25.479428  529329 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:53:25.479983  529329 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
	I0914 18:53:25.498522  529329 ssh_runner.go:195] Run: systemctl --version
	I0914 18:53:25.498617  529329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
	I0914 18:53:25.517770  529329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
	I0914 18:53:25.614486  529329 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0914 18:53:25.614569  529329 cache_images.go:254] Failed to load cached images for profile functional-759345. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0914 18:53:25.614587  529329 cache_images.go:262] succeeded pushing to: 
	I0914 18:53:25.614591  529329 cache_images.go:263] failed pushing to: functional-759345

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (55.99s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-480282 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-480282 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.563438573s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-480282 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-480282 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d157a3e0-33eb-4b62-94f3-d866b1704f3a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d157a3e0-33eb-4b62-94f3-d866b1704f3a] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.016341587s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-480282 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.007588347s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
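
For context, `nslookup hello-john.test 192.168.49.2` queries the ingress-dns server at the node IP directly, and here it timed out after roughly 15s. A sketch of the equivalent lookup in Go, assuming only the standard library; the server address, hostname, and 15s budget come from the log lines above:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver pinned to the ingress-dns server instead of /etc/resolv.conf.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed (as in the report):", err)
		return
	}
	fmt.Println(addrs)
}
```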
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons disable ingress-dns --alsologtostderr -v=1: (2.880464753s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons disable ingress --alsologtostderr -v=1
E0914 18:56:04.127050  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons disable ingress --alsologtostderr -v=1: (7.632324652s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-480282
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-480282:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5b4f0e92cea51c653719ba04defd6cab21fd6fb16bd49d1fef0c7cfccd5f3c8",
	        "Created": "2023-09-14T18:53:48.756794695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 530561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T18:53:49.114996633Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d5e38ecae883e5d7fbaaccc26de9209a95c7f11864ba7a4058d1702f044efe72",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b4f0e92cea51c653719ba04defd6cab21fd6fb16bd49d1fef0c7cfccd5f3c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b4f0e92cea51c653719ba04defd6cab21fd6fb16bd49d1fef0c7cfccd5f3c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b4f0e92cea51c653719ba04defd6cab21fd6fb16bd49d1fef0c7cfccd5f3c8/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b4f0e92cea51c653719ba04defd6cab21fd6fb16bd49d1fef0c7cfccd5f3c8/e5b4f0e92cea51c653719ba04defd6cab21fd6fb16bd49d1fef0c7cfccd5f3c8-json.log",
	        "Name": "/ingress-addon-legacy-480282",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-480282:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-480282",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bff4e9add48935542c951f040b7d99a293ac055821d128d93c43bf0e3fb92ab6-init/diff:/var/lib/docker/overlay2/b22941fdffad93645039179e8c1eee3cd74765d1689d400cab1ec16e85e4dbbf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bff4e9add48935542c951f040b7d99a293ac055821d128d93c43bf0e3fb92ab6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bff4e9add48935542c951f040b7d99a293ac055821d128d93c43bf0e3fb92ab6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bff4e9add48935542c951f040b7d99a293ac055821d128d93c43bf0e3fb92ab6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-480282",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-480282/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-480282",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-480282",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-480282",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b03221b2367c9fcf142f3295732137bba8a3e92355e1ef7b0a1d72f448ce47ef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b03221b2367c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-480282": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e5b4f0e92cea",
	                        "ingress-addon-legacy-480282"
	                    ],
	                    "NetworkID": "8debbc67a59267eeac5fc941f1010d357f7a31712ec23b8febeae47425511db6",
	                    "EndpointID": "b8c77162b7443cf435e948d7ed0784ed53a3f9d6590c3c94ad0282c2bc1c1a1e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-480282 -n ingress-addon-legacy-480282
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 logs -n 25
E0914 18:56:09.444510  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:09.449815  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:09.460067  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:09.480331  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:09.520628  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:09.600869  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:09.761125  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:10.081742  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480282 logs -n 25: (1.474910942s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-759345                                                            | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-759345                                                            | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-759345 image ls                                                   | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	| image          | functional-759345 image load --daemon                                        | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-759345                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345 image ls                                                   | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	| image          | functional-759345 image save                                                 | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-759345                     |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345 image rm                                                   | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-759345                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345 image ls                                                   | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	| image          | functional-759345 image load                                                 | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345 image save --daemon                                        | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-759345                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345                                                            | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345                                                            | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-759345 ssh pgrep                                                  | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-759345                                                            | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345                                                            | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-759345 image build -t                                             | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	|                | localhost/my-image:functional-759345                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-759345 image ls                                                   | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	| delete         | -p functional-759345                                                         | functional-759345           | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:53 UTC |
	| start          | -p ingress-addon-legacy-480282                                               | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:53 UTC | 14 Sep 23 18:55 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-480282                                                  | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:55 UTC | 14 Sep 23 18:55 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-480282                                                  | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:55 UTC | 14 Sep 23 18:55 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-480282                                                  | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:55 UTC | 14 Sep 23 18:55 UTC |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-480282 ip                                               | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:55 UTC | 14 Sep 23 18:55 UTC |
	| addons         | ingress-addon-legacy-480282                                                  | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:55 UTC | 14 Sep 23 18:56 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-480282                                                  | ingress-addon-legacy-480282 | jenkins | v1.31.2 | 14 Sep 23 18:56 UTC | 14 Sep 23 18:56 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:53:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:53:32.479871  530104 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:53:32.480076  530104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:32.480087  530104 out.go:309] Setting ErrFile to fd 2...
	I0914 18:53:32.480093  530104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:32.480379  530104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:53:32.480887  530104 out.go:303] Setting JSON to false
	I0914 18:53:32.482161  530104 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16556,"bootTime":1694701057,"procs":389,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:53:32.482240  530104 start.go:138] virtualization:  
	I0914 18:53:32.485375  530104 out.go:177] * [ingress-addon-legacy-480282] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:53:32.487750  530104 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:53:32.490407  530104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:53:32.487906  530104 notify.go:220] Checking for updates...
	I0914 18:53:32.494604  530104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:53:32.496879  530104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:53:32.499280  530104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:53:32.501582  530104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:53:32.503662  530104 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:53:32.528345  530104 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:53:32.528472  530104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:53:32.609618  530104 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-14 18:53:32.598853586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:53:32.609725  530104 docker.go:294] overlay module found
	I0914 18:53:32.612346  530104 out.go:177] * Using the docker driver based on user configuration
	I0914 18:53:32.614777  530104 start.go:298] selected driver: docker
	I0914 18:53:32.614797  530104 start.go:902] validating driver "docker" against <nil>
	I0914 18:53:32.614812  530104 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:53:32.615534  530104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:53:32.679505  530104 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-14 18:53:32.669588002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:53:32.679665  530104 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 18:53:32.679906  530104 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:53:32.682336  530104 out.go:177] * Using Docker driver with root privileges
	I0914 18:53:32.684476  530104 cni.go:84] Creating CNI manager for ""
	I0914 18:53:32.684500  530104 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:53:32.684513  530104 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 18:53:32.684533  530104 start_flags.go:321] config:
	{Name:ingress-addon-legacy-480282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480282 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:53:32.686963  530104 out.go:177] * Starting control plane node ingress-addon-legacy-480282 in cluster ingress-addon-legacy-480282
	I0914 18:53:32.689226  530104 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0914 18:53:32.691139  530104 out.go:177] * Pulling base image ...
	I0914 18:53:32.693315  530104 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0914 18:53:32.693348  530104 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0914 18:53:32.711559  530104 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0914 18:53:32.711583  530104 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0914 18:53:32.756944  530104 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0914 18:53:32.756980  530104 cache.go:57] Caching tarball of preloaded images
	I0914 18:53:32.757695  530104 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0914 18:53:32.759968  530104 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0914 18:53:32.762071  530104 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:53:32.871914  530104 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0914 18:53:40.753489  530104 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:53:40.753594  530104 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:53:41.946937  530104 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
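
Worth noting: the preload URL above carries its own `?checksum=md5:...` digest, and the tarball is hashed and verified before it is trusted. Below is a minimal Go sketch of that download-then-verify pattern; the URL, destination path, and digest are placeholders, and this is not minikube's actual download code.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest and verifies the md5 checksum
// computed during the copy against the expected hex digest.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("download %s: %s", url, resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the bytes as they are written so the file is read only once.
	sum := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, sum), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(sum.Sum(nil))
	if got != wantMD5 {
		os.Remove(dest) // discard the corrupt download
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Hypothetical example values; the real URL and digest appear in the log above.
	err := downloadWithMD5(
		"https://example.com/preloaded-images.tar.lz4",
		"/tmp/preloaded-images.tar.lz4",
		"9e505be2989b8c051b1372c317471064",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
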
	I0914 18:53:41.947322  530104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/config.json ...
	I0914 18:53:41.947357  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/config.json: {Name:mk175b5040f948c4a73bfbba261e0c4cb8e59863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:53:41.947542  530104 cache.go:195] Successfully downloaded all kic artifacts
	I0914 18:53:41.947565  530104 start.go:365] acquiring machines lock for ingress-addon-legacy-480282: {Name:mkee14024659573ca126f88daaf2eae2f1d69ddc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:53:41.948098  530104 start.go:369] acquired machines lock for "ingress-addon-legacy-480282" in 516.514µs
	I0914 18:53:41.948130  530104 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-480282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480282 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:53:41.948211  530104 start.go:125] createHost starting for "" (driver="docker")
	I0914 18:53:41.951015  530104 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0914 18:53:41.951303  530104 start.go:159] libmachine.API.Create for "ingress-addon-legacy-480282" (driver="docker")
	I0914 18:53:41.951332  530104 client.go:168] LocalClient.Create starting
	I0914 18:53:41.951428  530104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem
	I0914 18:53:41.951468  530104 main.go:141] libmachine: Decoding PEM data...
	I0914 18:53:41.951487  530104 main.go:141] libmachine: Parsing certificate...
	I0914 18:53:41.951548  530104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem
	I0914 18:53:41.951571  530104 main.go:141] libmachine: Decoding PEM data...
	I0914 18:53:41.951583  530104 main.go:141] libmachine: Parsing certificate...
	I0914 18:53:41.951947  530104 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-480282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 18:53:41.969394  530104 cli_runner.go:211] docker network inspect ingress-addon-legacy-480282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 18:53:41.969486  530104 network_create.go:281] running [docker network inspect ingress-addon-legacy-480282] to gather additional debugging logs...
	I0914 18:53:41.969507  530104 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-480282
	W0914 18:53:41.986574  530104 cli_runner.go:211] docker network inspect ingress-addon-legacy-480282 returned with exit code 1
	I0914 18:53:41.986624  530104 network_create.go:284] error running [docker network inspect ingress-addon-legacy-480282]: docker network inspect ingress-addon-legacy-480282: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-480282 not found
	I0914 18:53:41.986639  530104 network_create.go:286] output of [docker network inspect ingress-addon-legacy-480282]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-480282 not found
	
	** /stderr **
	I0914 18:53:41.986705  530104 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:53:42.011549  530104 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000dd54b0}
	I0914 18:53:42.011591  530104 network_create.go:123] attempt to create docker network ingress-addon-legacy-480282 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 18:53:42.011657  530104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-480282 ingress-addon-legacy-480282
	I0914 18:53:42.094758  530104 network_create.go:107] docker network ingress-addon-legacy-480282 192.168.49.0/24 created
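
Before `docker network create` can run, a free private /24 has to be chosen; the log shows 192.168.49.0/24 being picked after inspecting the existing bridge. A small Go sketch of that scan follows; the candidate range and step are illustrative, not minikube's exact policy.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate 192.168.x.0/24 blocks and returns the
// first one that does not overlap any CIDR already in use.
func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	for third := 49; third < 255; third++ {
		_, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		free := true
		for _, t := range taken {
			// CIDR blocks are either nested or disjoint, so two blocks
			// overlap iff one contains the other's base address.
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return cand, nil
		}
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// Pretend the default docker bridge already holds 172.17.0.0/16.
	_, bridge, _ := net.ParseCIDR("172.17.0.0/16")
	sub, err := firstFreeSubnet([]*net.IPNet{bridge})
	if err != nil {
		panic(err)
	}
	fmt.Println("picked", sub) // picked 192.168.49.0/24
}
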
	I0914 18:53:42.094795  530104 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-480282" container
	I0914 18:53:42.094928  530104 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 18:53:42.113793  530104 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-480282 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480282 --label created_by.minikube.sigs.k8s.io=true
	I0914 18:53:42.136719  530104 oci.go:103] Successfully created a docker volume ingress-addon-legacy-480282
	I0914 18:53:42.136824  530104 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-480282-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480282 --entrypoint /usr/bin/test -v ingress-addon-legacy-480282:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0914 18:53:43.640912  530104 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-480282-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480282 --entrypoint /usr/bin/test -v ingress-addon-legacy-480282:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib: (1.504039451s)
	I0914 18:53:43.640945  530104 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-480282
	I0914 18:53:43.640966  530104 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0914 18:53:43.640986  530104 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 18:53:43.641072  530104 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-480282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 18:53:48.679874  530104 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-480282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (5.038756374s)
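
The two `docker run --rm ... --entrypoint /usr/bin/tar` commands above are the standard trick for populating a named volume: a throwaway container mounts the host tarball read-only next to the volume and untars into it. A hedged Go sketch shelling out to the docker CLI; the image, paths, and volume name are placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload untars a host-side .tar.lz4 into a named docker volume by
// running a short-lived container whose entrypoint is tar.
func extractPreload(image, tarball, volume string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
		"-v", volume+":/extractDir", // named volume to populate
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload(
		"ubuntu:22.04", // placeholder; any image that ships tar and lz4 works
		"/tmp/preloaded-images.tar.lz4",
		"my-kic-volume",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The volume outlives the throwaway container, so the cluster container that mounts it later starts with the images already in place.
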
	I0914 18:53:48.679905  530104 kic.go:199] duration metric: took 5.038916 seconds to extract preloaded images to volume
	W0914 18:53:48.680047  530104 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 18:53:48.680157  530104 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 18:53:48.741102  530104 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-480282 --name ingress-addon-legacy-480282 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480282 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-480282 --network ingress-addon-legacy-480282 --ip 192.168.49.2 --volume ingress-addon-legacy-480282:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0914 18:53:49.124318  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Running}}
	I0914 18:53:49.149156  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Status}}
	I0914 18:53:49.178570  530104 cli_runner.go:164] Run: docker exec ingress-addon-legacy-480282 stat /var/lib/dpkg/alternatives/iptables
	I0914 18:53:49.251923  530104 oci.go:144] the created container "ingress-addon-legacy-480282" has a running status.
	I0914 18:53:49.251955  530104 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa...
	I0914 18:53:49.588456  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0914 18:53:49.588507  530104 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 18:53:49.620297  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Status}}
	I0914 18:53:49.645046  530104 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 18:53:49.645071  530104 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-480282 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 18:53:49.755369  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Status}}
	I0914 18:53:49.785864  530104 machine.go:88] provisioning docker machine ...
	I0914 18:53:49.785918  530104 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-480282"
	I0914 18:53:49.786010  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:49.822148  530104 main.go:141] libmachine: Using SSH client type: native
	I0914 18:53:49.823431  530104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33412 <nil> <nil>}
	I0914 18:53:49.823455  530104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-480282 && echo "ingress-addon-legacy-480282" | sudo tee /etc/hostname
	I0914 18:53:49.824202  530104 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0914 18:53:52.974904  530104 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-480282
	
	I0914 18:53:52.975058  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:52.993059  530104 main.go:141] libmachine: Using SSH client type: native
	I0914 18:53:52.993526  530104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33412 <nil> <nil>}
	I0914 18:53:52.993550  530104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-480282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-480282/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-480282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:53:53.133754  530104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
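
An aside on the `Error dialing TCP: ssh: handshake failed: EOF` line above: sshd inside the just-started container is not yet accepting connections, and the provisioner simply retries the dial until it succeeds a few seconds later. A minimal Go sketch of that retry loop with golang.org/x/crypto/ssh; the address, user, and auth below are placeholders, not minikube's sshutil.

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH handshake until sshd inside a
// freshly started container is ready to answer.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second << uint(i)) // simple exponential backoff
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),               // acceptable for a local test node only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33412", cfg, 5)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
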
	I0914 18:53:53.133781  530104 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17217-492678/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-492678/.minikube}
	I0914 18:53:53.133808  530104 ubuntu.go:177] setting up certificates
	I0914 18:53:53.133817  530104 provision.go:83] configureAuth start
	I0914 18:53:53.133881  530104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-480282
	I0914 18:53:53.152154  530104 provision.go:138] copyHostCerts
	I0914 18:53:53.152195  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem
	I0914 18:53:53.152226  530104 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem, removing ...
	I0914 18:53:53.152236  530104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem
	I0914 18:53:53.152360  530104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/ca.pem (1082 bytes)
	I0914 18:53:53.152445  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem
	I0914 18:53:53.152467  530104 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem, removing ...
	I0914 18:53:53.152472  530104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem
	I0914 18:53:53.152501  530104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/cert.pem (1123 bytes)
	I0914 18:53:53.152543  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem
	I0914 18:53:53.152562  530104 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem, removing ...
	I0914 18:53:53.152566  530104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem
	I0914 18:53:53.152724  530104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-492678/.minikube/key.pem (1679 bytes)
	I0914 18:53:53.152784  530104 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-480282 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-480282]
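
configureAuth, above, mints a server certificate whose SAN list covers the node IP, loopback, and the machine names. A condensed sketch with Go's crypto/x509 follows; it self-signs for brevity where minikube signs against its own CA, and the org and SAN values are copied from the log purely for illustration.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.example"}}, // placeholder org
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list mirrors the one logged above: node IP, loopback, names.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-480282"},
	}
	// Self-signed for brevity; a real setup signs with the cluster CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
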
	I0914 18:53:53.501454  530104 provision.go:172] copyRemoteCerts
	I0914 18:53:53.501522  530104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:53:53.501571  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:53.520116  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:53:53.619534  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 18:53:53.619649  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:53:53.648602  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 18:53:53.648663  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 18:53:53.676867  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 18:53:53.676949  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:53:53.705991  530104 provision.go:86] duration metric: configureAuth took 572.15995ms
	I0914 18:53:53.706018  530104 ubuntu.go:193] setting minikube options for container-runtime
	I0914 18:53:53.706221  530104 config.go:182] Loaded profile config "ingress-addon-legacy-480282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0914 18:53:53.706234  530104 machine.go:91] provisioned docker machine in 3.920329156s
	I0914 18:53:53.706240  530104 client.go:171] LocalClient.Create took 11.754901947s
	I0914 18:53:53.706253  530104 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-480282" took 11.754950127s
	I0914 18:53:53.706271  530104 start.go:300] post-start starting for "ingress-addon-legacy-480282" (driver="docker")
	I0914 18:53:53.706283  530104 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:53:53.706342  530104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:53:53.706388  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:53.724183  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:53:53.823657  530104 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:53:53.827897  530104 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 18:53:53.827934  530104 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 18:53:53.827945  530104 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 18:53:53.827953  530104 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 18:53:53.827968  530104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/addons for local assets ...
	I0914 18:53:53.828031  530104 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-492678/.minikube/files for local assets ...
	I0914 18:53:53.828121  530104 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem -> 4980292.pem in /etc/ssl/certs
	I0914 18:53:53.828133  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem -> /etc/ssl/certs/4980292.pem
	I0914 18:53:53.828241  530104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:53:53.838778  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem --> /etc/ssl/certs/4980292.pem (1708 bytes)
	I0914 18:53:53.867868  530104 start.go:303] post-start completed in 161.578536ms
	I0914 18:53:53.868256  530104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-480282
	I0914 18:53:53.886666  530104 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/config.json ...
	I0914 18:53:53.886960  530104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:53:53.887018  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:53.904759  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:53:53.999355  530104 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 18:53:54.010744  530104 start.go:128] duration metric: createHost completed in 12.062503656s
	I0914 18:53:54.010776  530104 start.go:83] releasing machines lock for "ingress-addon-legacy-480282", held for 12.062660644s
	I0914 18:53:54.010874  530104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-480282
	I0914 18:53:54.030861  530104 ssh_runner.go:195] Run: cat /version.json
	I0914 18:53:54.030938  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:54.030869  530104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:53:54.031086  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:53:54.051106  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:53:54.060727  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:53:54.145309  530104 ssh_runner.go:195] Run: systemctl --version
	I0914 18:53:54.283842  530104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 18:53:54.289907  530104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 18:53:54.321965  530104 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 18:53:54.322110  530104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:53:54.358585  530104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 18:53:54.358658  530104 start.go:469] detecting cgroup driver to use...
	I0914 18:53:54.358711  530104 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 18:53:54.358837  530104 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 18:53:54.373910  530104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 18:53:54.388203  530104 docker.go:196] disabling cri-docker service (if available) ...
	I0914 18:53:54.388295  530104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:53:54.404857  530104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:53:54.422208  530104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:53:54.512065  530104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:53:54.613008  530104 docker.go:212] disabling docker service ...
	I0914 18:53:54.613114  530104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:53:54.637282  530104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:53:54.651179  530104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:53:54.748355  530104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:53:54.848143  530104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:53:54.863479  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:53:54.884431  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0914 18:53:54.897080  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 18:53:54.909331  530104 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 18:53:54.909407  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 18:53:54.921339  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:53:54.933288  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 18:53:54.945621  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:53:54.957440  530104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:53:54.968544  530104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 18:53:54.980496  530104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:53:54.990721  530104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:53:55.003021  530104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:53:55.111361  530104 ssh_runner.go:195] Run: sudo systemctl restart containerd
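
The sed pipeline above pins containerd to the cgroupfs driver by rewriting `SystemdCgroup` in /etc/containerd/config.toml before restarting the service. The same edit expressed in Go, as a sketch that, like the sed, works on the raw text rather than parsing TOML:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites every "SystemdCgroup = ..." assignment in a
// containerd config.toml, preserving indentation, just as the sed does.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Placeholder path; on a real node this needs root and a containerd restart.
	if err := setSystemdCgroup("/tmp/config.toml", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
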
	I0914 18:53:55.260816  530104 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 18:53:55.260944  530104 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 18:53:55.265961  530104 start.go:537] Will wait 60s for crictl version
	I0914 18:53:55.266039  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:53:55.270600  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:53:55.311993  530104 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.22
	RuntimeApiVersion:  v1
	I0914 18:53:55.312111  530104 ssh_runner.go:195] Run: containerd --version
	I0914 18:53:55.351737  530104 ssh_runner.go:195] Run: containerd --version
	I0914 18:53:55.388420  530104 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.22 ...
	I0914 18:53:55.390513  530104 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-480282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:53:55.407660  530104 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 18:53:55.412694  530104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
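
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` idiom above makes the /etc/hosts update idempotent: drop any stale line for the name, append the fresh mapping, and replace the file in one step. A Go rendering of the same logic, demoed on a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostEntry removes any existing line ending in "\thost" and appends
// "ip\thost", mirroring the grep -v / echo pipeline in the log above.
func upsertHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the shell uses sudo cp; rename suffices when we own the file
}

func main() {
	os.WriteFile("/tmp/hosts.demo", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := upsertHostEntry("/tmp/hosts.demo", "192.168.49.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	out, _ := os.ReadFile("/tmp/hosts.demo")
	fmt.Print(string(out))
}
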
	I0914 18:53:55.426509  530104 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0914 18:53:55.426584  530104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:53:55.466047  530104 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 18:53:55.466126  530104 ssh_runner.go:195] Run: which lz4
	I0914 18:53:55.470819  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0914 18:53:55.470959  530104 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:53:55.475285  530104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:53:55.475323  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0914 18:53:57.652920  530104 containerd.go:547] Took 2.181986 seconds to copy over tarball
	I0914 18:53:57.653014  530104 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:54:00.491229  530104 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838184651s)
	I0914 18:54:00.491254  530104 containerd.go:554] Took 2.838307 seconds to extract the tarball
	I0914 18:54:00.491263  530104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:54:00.573982  530104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:54:00.672540  530104 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 18:54:00.806515  530104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:54:00.852902  530104 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 18:54:00.852928  530104 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:54:00.852967  530104 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:54:00.853144  530104 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 18:54:00.853259  530104 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 18:54:00.853369  530104 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 18:54:00.853448  530104 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 18:54:00.853521  530104 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:54:00.853586  530104 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0914 18:54:00.853681  530104 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0914 18:54:00.854546  530104 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 18:54:00.854955  530104 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 18:54:00.855116  530104 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:54:00.855287  530104 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 18:54:00.855328  530104 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0914 18:54:00.855367  530104 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0914 18:54:00.855409  530104 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 18:54:00.855545  530104 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W0914 18:54:01.209194  530104 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.209375  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W0914 18:54:01.278377  530104 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.280930  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	I0914 18:54:01.308922  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W0914 18:54:01.312364  530104 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.312556  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W0914 18:54:01.330836  530104 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.331043  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W0914 18:54:01.350633  530104 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.350844  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W0914 18:54:01.365178  530104 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.365368  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	I0914 18:54:01.458495  530104 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0914 18:54:01.458557  530104 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0914 18:54:01.458624  530104 ssh_runner.go:195] Run: which crictl
	W0914 18:54:01.459329  530104 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 18:54:01.459467  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 18:54:02.043956  530104 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0914 18:54:02.044018  530104 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 18:54:02.044072  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.044140  530104 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0914 18:54:02.044160  530104 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:54:02.044181  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.044230  530104 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0914 18:54:02.044252  530104 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0914 18:54:02.044271  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.101401  530104 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0914 18:54:02.101452  530104 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 18:54:02.101499  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.101566  530104 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0914 18:54:02.101585  530104 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 18:54:02.101614  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.129040  530104 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0914 18:54:02.129097  530104 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 18:54:02.129153  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.129273  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0914 18:54:02.166087  530104 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 18:54:02.166149  530104 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:54:02.166270  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0914 18:54:02.166334  530104 ssh_runner.go:195] Run: which crictl
	I0914 18:54:02.166375  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:54:02.166351  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0914 18:54:02.166466  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0914 18:54:02.166515  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 18:54:02.225785  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0914 18:54:02.225877  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0914 18:54:02.335976  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0914 18:54:02.336081  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0914 18:54:02.336122  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0914 18:54:02.336143  530104 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:54:02.336245  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0914 18:54:02.336312  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0914 18:54:02.346717  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0914 18:54:02.398720  530104 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:54:02.398794  530104 cache_images.go:92] LoadImages completed in 1.545853332s
	W0914 18:54:02.398863  530104 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17217-492678/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0914 18:54:02.398915  530104 ssh_runner.go:195] Run: sudo crictl info
	I0914 18:54:02.439173  530104 cni.go:84] Creating CNI manager for ""
	I0914 18:54:02.439197  530104 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:54:02.439255  530104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 18:54:02.439282  530104 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-480282 NodeName:ingress-addon-legacy-480282 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:54:02.439427  530104 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-480282"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:54:02.439507  530104 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-480282 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480282 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
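
The kubeadm manifest and kubelet unit above are rendered from the options struct logged at the start of this block. A toy text/template version that produces a skeleton of the same InitConfiguration/ClusterConfiguration pair; the struct fields and template here are illustrative, not minikube's kubeadm.go.

package main

import (
	"os"
	"text/template"
)

// Opts carries the handful of values the toy template below consumes.
type Opts struct {
	AdvertiseAddress  string
	BindPort          int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the generated config shown above.
	err := t.Execute(os.Stdout, Opts{
		AdvertiseAddress:  "192.168.49.2",
		BindPort:          8443,
		ClusterName:       "mk",
		KubernetesVersion: "v1.18.20",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
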
	I0914 18:54:02.439579  530104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0914 18:54:02.450510  530104 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:54:02.450626  530104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:54:02.461332  530104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0914 18:54:02.483310  530104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0914 18:54:02.505570  530104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
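
The `scp memory --> ...` lines above push in-memory buffers straight to files on the node, with no local temp file. One hedged way to do that with golang.org/x/crypto/ssh is to stream the bytes into `sudo tee` on the remote side; the connection details below are placeholders, and minikube's ssh_runner differs in detail.

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams data into a root-owned path on the node by feeding
// it to "sudo tee" over an SSH session, so no local temp file is needed.
func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee writes stdin to the file; /dev/null swallows the echo back.
	return sess.Run(fmt.Sprintf("sudo tee %q > /dev/null", path))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),               // local test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33412", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	unit := []byte("[Unit]\nDescription=demo\n")
	if err := writeRemoteFile(client, "/tmp/demo.service", unit); err != nil {
		panic(err)
	}
}
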
	I0914 18:54:02.526861  530104 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 18:54:02.531495  530104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:54:02.546318  530104 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282 for IP: 192.168.49.2
	I0914 18:54:02.546358  530104 certs.go:190] acquiring lock for shared ca certs: {Name:mka5985e85be7ad08b440e022e8dd6d327029a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:02.546526  530104 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key
	I0914 18:54:02.546583  530104 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key
	I0914 18:54:02.546646  530104 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.key
	I0914 18:54:02.546661  530104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt with IP's: []
	I0914 18:54:03.763273  530104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt ...
	I0914 18:54:03.763306  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: {Name:mka70e4dd9cb2ae8d6e3ea0b61549c68e6333558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:03.764035  530104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.key ...
	I0914 18:54:03.764053  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.key: {Name:mk432b8053d158f3a3999937c94d09ddb1ce7285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:03.764152  530104 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key.dd3b5fb2
	I0914 18:54:03.764169  530104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 18:54:04.289022  530104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt.dd3b5fb2 ...
	I0914 18:54:04.289056  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt.dd3b5fb2: {Name:mk631fda7739ae080e3a8b9d75a4ac8e5c3a0e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:04.289267  530104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key.dd3b5fb2 ...
	I0914 18:54:04.289282  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key.dd3b5fb2: {Name:mk89f26355e675eef00dc5afd4e4d286591a32e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:04.289371  530104 certs.go:337] copying /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt
	I0914 18:54:04.289448  530104 certs.go:341] copying /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key
	I0914 18:54:04.289509  530104 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.key
	I0914 18:54:04.289527  530104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.crt with IP's: []
	I0914 18:54:05.073682  530104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.crt ...
	I0914 18:54:05.073712  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.crt: {Name:mkcac4797ef97fe9fa4d804add4f28043f6c5e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:05.073908  530104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.key ...
	I0914 18:54:05.073921  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.key: {Name:mkb19b8dfba75ef75c0d92a8135b63b839efbed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:05.074653  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 18:54:05.074686  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 18:54:05.074700  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 18:54:05.074712  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 18:54:05.074727  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 18:54:05.074743  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 18:54:05.074756  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 18:54:05.074772  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 18:54:05.074835  530104 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem (1338 bytes)
	W0914 18:54:05.074882  530104 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029_empty.pem, impossibly tiny 0 bytes
	I0914 18:54:05.074894  530104 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:54:05.074922  530104 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:54:05.074954  530104 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:54:05.074981  530104 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/home/jenkins/minikube-integration/17217-492678/.minikube/certs/key.pem (1679 bytes)
	I0914 18:54:05.075033  530104 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem (1708 bytes)
	I0914 18:54:05.075070  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem -> /usr/share/ca-certificates/4980292.pem
	I0914 18:54:05.075088  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:54:05.075106  530104 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem -> /usr/share/ca-certificates/498029.pem
	I0914 18:54:05.075858  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 18:54:05.107621  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:54:05.137904  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:54:05.167682  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:54:05.197319  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:54:05.226020  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 18:54:05.256468  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:54:05.285721  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:54:05.314369  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/ssl/certs/4980292.pem --> /usr/share/ca-certificates/4980292.pem (1708 bytes)
	I0914 18:54:05.343400  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:54:05.372372  530104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-492678/.minikube/certs/498029.pem --> /usr/share/ca-certificates/498029.pem (1338 bytes)
	I0914 18:54:05.402548  530104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:54:05.424777  530104 ssh_runner.go:195] Run: openssl version
	I0914 18:54:05.432103  530104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4980292.pem && ln -fs /usr/share/ca-certificates/4980292.pem /etc/ssl/certs/4980292.pem"
	I0914 18:54:05.444119  530104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4980292.pem
	I0914 18:54:05.449107  530104 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:49 /usr/share/ca-certificates/4980292.pem
	I0914 18:54:05.449217  530104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4980292.pem
	I0914 18:54:05.458191  530104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4980292.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:54:05.470107  530104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:54:05.482074  530104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:54:05.486887  530104 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:54:05.486971  530104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:54:05.495735  530104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:54:05.507775  530104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/498029.pem && ln -fs /usr/share/ca-certificates/498029.pem /etc/ssl/certs/498029.pem"
	I0914 18:54:05.519726  530104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/498029.pem
	I0914 18:54:05.524342  530104 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:49 /usr/share/ca-certificates/498029.pem
	I0914 18:54:05.524409  530104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/498029.pem
	I0914 18:54:05.532972  530104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/498029.pem /etc/ssl/certs/51391683.0"
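The ls / openssl x509 -hash / ln sequence above implements OpenSSL's hashed-directory CA lookup: -hash prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs lets OpenSSL-based clients locate the CA by hash instead of scanning every file. A sketch using this run's values (3ec20f2e is the hash computed for 4980292.pem earlier in the log):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/4980292.pem)  # -> 3ec20f2e
	sudo ln -fs /etc/ssl/certs/4980292.pem "/etc/ssl/certs/${hash}.0"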
	I0914 18:54:05.544825  530104 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 18:54:05.549117  530104 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 18:54:05.549186  530104 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-480282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480282 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:54:05.549272  530104 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 18:54:05.549368  530104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:54:05.591552  530104 cri.go:89] found id: ""
	I0914 18:54:05.591627  530104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:54:05.603718  530104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:54:05.615177  530104 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 18:54:05.615242  530104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:54:05.626233  530104 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:54:05.626279  530104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 18:54:05.693780  530104 kubeadm.go:322] W0914 18:54:05.693116    1104 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0914 18:54:05.751773  530104 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 18:54:05.857299  530104 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:54:14.133614  530104 kubeadm.go:322] W0914 18:54:14.124771    1104 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 18:54:14.133746  530104 kubeadm.go:322] W0914 18:54:14.127391    1104 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 18:54:27.631304  530104 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0914 18:54:27.631366  530104 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 18:54:27.631451  530104 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 18:54:27.631506  530104 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 18:54:27.631544  530104 kubeadm.go:322] OS: Linux
	I0914 18:54:27.631589  530104 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 18:54:27.631639  530104 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 18:54:27.631685  530104 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 18:54:27.631733  530104 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 18:54:27.631783  530104 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 18:54:27.631831  530104 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 18:54:27.631902  530104 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:54:27.631993  530104 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:54:27.632083  530104 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:54:27.632188  530104 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:54:27.632272  530104 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:54:27.632311  530104 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 18:54:27.632377  530104 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:54:27.634540  530104 out.go:204]   - Generating certificates and keys ...
	I0914 18:54:27.634629  530104 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 18:54:27.634695  530104 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 18:54:27.634764  530104 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 18:54:27.634821  530104 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 18:54:27.634881  530104 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 18:54:27.634931  530104 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 18:54:27.634984  530104 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 18:54:27.635108  530104 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-480282 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 18:54:27.635173  530104 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 18:54:27.635296  530104 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-480282 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 18:54:27.635360  530104 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 18:54:27.635422  530104 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 18:54:27.635467  530104 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 18:54:27.635522  530104 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:54:27.635575  530104 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:54:27.635627  530104 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:54:27.635692  530104 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:54:27.635747  530104 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:54:27.635812  530104 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:54:27.638041  530104 out.go:204]   - Booting up control plane ...
	I0914 18:54:27.638151  530104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:54:27.638242  530104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:54:27.638319  530104 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:54:27.638411  530104 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:54:27.638572  530104 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:54:27.638651  530104 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.011000 seconds
	I0914 18:54:27.638784  530104 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:54:27.638915  530104 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:54:27.638987  530104 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:54:27.639115  530104 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-480282 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 18:54:27.639168  530104 kubeadm.go:322] [bootstrap-token] Using token: stawuf.6ysj9da6tm2bu5vz
	I0914 18:54:27.641176  530104 out.go:204]   - Configuring RBAC rules ...
	I0914 18:54:27.641279  530104 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:54:27.641364  530104 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:54:27.641500  530104 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:54:27.641641  530104 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:54:27.641761  530104 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:54:27.641854  530104 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:54:27.641975  530104 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:54:27.642038  530104 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 18:54:27.642086  530104 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 18:54:27.642094  530104 kubeadm.go:322] 
	I0914 18:54:27.642149  530104 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 18:54:27.642157  530104 kubeadm.go:322] 
	I0914 18:54:27.642229  530104 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 18:54:27.642237  530104 kubeadm.go:322] 
	I0914 18:54:27.642261  530104 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 18:54:27.642319  530104 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:54:27.642369  530104 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:54:27.642376  530104 kubeadm.go:322] 
	I0914 18:54:27.642424  530104 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 18:54:27.642497  530104 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:54:27.642564  530104 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:54:27.642573  530104 kubeadm.go:322] 
	I0914 18:54:27.642651  530104 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:54:27.642729  530104 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 18:54:27.642742  530104 kubeadm.go:322] 
	I0914 18:54:27.642833  530104 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token stawuf.6ysj9da6tm2bu5vz \
	I0914 18:54:27.642934  530104 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9891dba8af05d8d789a2289ec0f3d6b8812b95541089682ca62328aa5c24a5b6 \
	I0914 18:54:27.642959  530104 kubeadm.go:322]     --control-plane 
	I0914 18:54:27.642967  530104 kubeadm.go:322] 
	I0914 18:54:27.643046  530104 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:54:27.643053  530104 kubeadm.go:322] 
	I0914 18:54:27.643130  530104 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token stawuf.6ysj9da6tm2bu5vz \
	I0914 18:54:27.643239  530104 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9891dba8af05d8d789a2289ec0f3d6b8812b95541089682ca62328aa5c24a5b6 
	I0914 18:54:27.643251  530104 cni.go:84] Creating CNI manager for ""
	I0914 18:54:27.643259  530104 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:54:27.645821  530104 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 18:54:27.647957  530104 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 18:54:27.653058  530104 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0914 18:54:27.653085  530104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 18:54:27.675921  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 18:54:28.110943  530104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:54:28.111080  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:28.111168  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b minikube.k8s.io/name=ingress-addon-legacy-480282 minikube.k8s.io/updated_at=2023_09_14T18_54_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:28.263717  530104 ops.go:34] apiserver oom_adj: -16
	I0914 18:54:28.263807  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:28.363268  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:28.961167  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:29.460568  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:29.960575  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:30.460834  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:30.961371  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:31.460528  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:31.961403  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:32.461289  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:32.960628  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:33.460640  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:33.960610  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:34.460687  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:34.961273  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:35.461236  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:35.961346  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:36.461058  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:36.961001  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:37.460713  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:37.960567  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:38.461052  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:38.961499  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:39.461338  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:39.961416  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:40.460547  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:40.960567  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:41.460712  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:41.960575  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:54:42.461030  530104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
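The run of identical kubectl get sa default calls above is a readiness poll: minikube keeps retrying until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration line below measures. A minimal shell equivalent, assuming the ~500ms retry interval implied by the timestamps:

	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5  # timestamps above advance in roughly 500ms steps
	done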
	I0914 18:54:42.567962  530104 kubeadm.go:1081] duration metric: took 14.456932594s to wait for elevateKubeSystemPrivileges.
	I0914 18:54:42.567996  530104 kubeadm.go:406] StartCluster complete in 37.018816475s
	I0914 18:54:42.568013  530104 settings.go:142] acquiring lock: {Name:mkfaf0f329c2736368d7fc21433e53e0c9a5b1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:42.568087  530104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:54:42.568835  530104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/kubeconfig: {Name:mk6a8e8b5c770de881617bb4e8ebf560fd4b6800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:54:42.569669  530104 kapi.go:59] client config for ingress-addon-legacy-480282: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.key", CAFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 18:54:42.571020  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 18:54:42.571450  530104 config.go:182] Loaded profile config "ingress-addon-legacy-480282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0914 18:54:42.571496  530104 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 18:54:42.571556  530104 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-480282"
	I0914 18:54:42.571568  530104 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 18:54:42.571575  530104 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-480282"
	I0914 18:54:42.571585  530104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-480282"
	I0914 18:54:42.571571  530104 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-480282"
	I0914 18:54:42.571653  530104 host.go:66] Checking if "ingress-addon-legacy-480282" exists ...
	I0914 18:54:42.571897  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Status}}
	I0914 18:54:42.572065  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Status}}
	I0914 18:54:42.614748  530104 kapi.go:59] client config for ingress-addon-legacy-480282: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.key", CAFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 18:54:42.620783  530104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:54:42.622597  530104 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:54:42.622620  530104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:54:42.622690  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:54:42.634206  530104 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-480282" context rescaled to 1 replicas
	I0914 18:54:42.634246  530104 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:54:42.637268  530104 out.go:177] * Verifying Kubernetes components...
	I0914 18:54:42.640606  530104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:54:42.636718  530104 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-480282"
	I0914 18:54:42.640822  530104 host.go:66] Checking if "ingress-addon-legacy-480282" exists ...
	I0914 18:54:42.641382  530104 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480282 --format={{.State.Status}}
	I0914 18:54:42.652401  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:54:42.677190  530104 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:54:42.677214  530104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:54:42.677278  530104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480282
	I0914 18:54:42.725331  530104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33412 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/ingress-addon-legacy-480282/id_rsa Username:docker}
	I0914 18:54:42.840516  530104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
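The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile ahead of the forward directive, so host.minikube.internal resolves to the host gateway from inside the cluster (it also inserts a log directive before errors). Reconstructed from the sed expressions, the inserted block is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}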
	I0914 18:54:42.841189  530104 kapi.go:59] client config for ingress-addon-legacy-480282: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.key", CAFile:"/home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 18:54:42.841527  530104 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-480282" to be "Ready" ...
	I0914 18:54:42.845368  530104 node_ready.go:49] node "ingress-addon-legacy-480282" has status "Ready":"True"
	I0914 18:54:42.845393  530104 node_ready.go:38] duration metric: took 3.829053ms waiting for node "ingress-addon-legacy-480282" to be "Ready" ...
	I0914 18:54:42.845404  530104 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:54:42.854790  530104 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace to be "Ready" ...
	I0914 18:54:42.894255  530104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:54:42.899497  530104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:54:43.245989  530104 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 18:54:43.354104  530104 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 18:54:43.356096  530104 addons.go:502] enable addons completed in 784.592839ms: enabled=[default-storageclass storage-provisioner]
	I0914 18:54:44.882248  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:54:47.378835  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:54:49.880174  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:54:52.380533  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:54:54.882297  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:54:57.378503  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:54:59.378982  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:55:01.379561  530104 pod_ready.go:102] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:55:02.878662  530104 pod_ready.go:92] pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:55:02.878691  530104 pod_ready.go:81] duration metric: took 20.023817982s waiting for pod "coredns-66bff467f8-kw8p5" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.878705  530104 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.885610  530104 pod_ready.go:92] pod "etcd-ingress-addon-legacy-480282" in "kube-system" namespace has status "Ready":"True"
	I0914 18:55:02.885636  530104 pod_ready.go:81] duration metric: took 6.922268ms waiting for pod "etcd-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.885650  530104 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.890564  530104 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-480282" in "kube-system" namespace has status "Ready":"True"
	I0914 18:55:02.890591  530104 pod_ready.go:81] duration metric: took 4.933005ms waiting for pod "kube-apiserver-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.890603  530104 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.895847  530104 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-480282" in "kube-system" namespace has status "Ready":"True"
	I0914 18:55:02.895876  530104 pod_ready.go:81] duration metric: took 5.264723ms waiting for pod "kube-controller-manager-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.895889  530104 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rr4k2" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.901279  530104 pod_ready.go:92] pod "kube-proxy-rr4k2" in "kube-system" namespace has status "Ready":"True"
	I0914 18:55:02.901307  530104 pod_ready.go:81] duration metric: took 5.411636ms waiting for pod "kube-proxy-rr4k2" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:02.901318  530104 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:03.073621  530104 request.go:629] Waited for 172.233708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-480282
	I0914 18:55:03.274436  530104 request.go:629] Waited for 197.355252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-480282
	I0914 18:55:03.277242  530104 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-480282" in "kube-system" namespace has status "Ready":"True"
	I0914 18:55:03.277266  530104 pod_ready.go:81] duration metric: took 375.940191ms waiting for pod "kube-scheduler-ingress-addon-legacy-480282" in "kube-system" namespace to be "Ready" ...
	I0914 18:55:03.277277  530104 pod_ready.go:38] duration metric: took 20.431863352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:55:03.277293  530104 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:55:03.277364  530104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:55:03.291955  530104 api_server.go:72] duration metric: took 20.657675928s to wait for apiserver process to appear ...
	I0914 18:55:03.291994  530104 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:55:03.292014  530104 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 18:55:03.300972  530104 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
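The healthz probe above is a plain HTTPS GET against the apiserver; a healthy control plane answers 200 with the literal body ok. A hypothetical manual check against this cluster (-k skips verification of the cluster's self-signed CA; anonymous access to /healthz is an assumption that holds on default minikube clusters):

	curl -k https://192.168.49.2:8443/healthz   # expect: ok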
	I0914 18:55:03.301942  530104 api_server.go:141] control plane version: v1.18.20
	I0914 18:55:03.301973  530104 api_server.go:131] duration metric: took 9.966942ms to wait for apiserver health ...
	I0914 18:55:03.301982  530104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:55:03.474369  530104 request.go:629] Waited for 172.318935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 18:55:03.480812  530104 system_pods.go:59] 8 kube-system pods found
	I0914 18:55:03.480847  530104 system_pods.go:61] "coredns-66bff467f8-kw8p5" [7343d661-a6de-4c0c-994d-863480f09832] Running
	I0914 18:55:03.480876  530104 system_pods.go:61] "etcd-ingress-addon-legacy-480282" [78bf522e-75ce-49c1-be68-e8ff7a6f77ad] Running
	I0914 18:55:03.480884  530104 system_pods.go:61] "kindnet-x6mvc" [437e490a-f27a-4504-bd95-90c15172cd37] Running
	I0914 18:55:03.480895  530104 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-480282" [69e307f8-6076-4d9d-b1d5-94d968d3322e] Running
	I0914 18:55:03.480908  530104 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-480282" [09bb50ca-93a3-4b27-aeae-d2f7814148da] Running
	I0914 18:55:03.480917  530104 system_pods.go:61] "kube-proxy-rr4k2" [d8d5e9c9-6619-4bfb-abb5-0ce945f02e91] Running
	I0914 18:55:03.480922  530104 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-480282" [0049eaee-5043-4c04-bff7-af3ddbf20aa6] Running
	I0914 18:55:03.480926  530104 system_pods.go:61] "storage-provisioner" [2baa7f34-fc5d-4722-9c2f-6ce13317e782] Running
	I0914 18:55:03.480938  530104 system_pods.go:74] duration metric: took 178.948658ms to wait for pod list to return data ...
	I0914 18:55:03.480964  530104 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:55:03.674441  530104 request.go:629] Waited for 193.388336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0914 18:55:03.676915  530104 default_sa.go:45] found service account: "default"
	I0914 18:55:03.676944  530104 default_sa.go:55] duration metric: took 195.972875ms for default service account to be created ...
	I0914 18:55:03.676954  530104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:55:03.874496  530104 request.go:629] Waited for 197.435515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 18:55:03.880399  530104 system_pods.go:86] 8 kube-system pods found
	I0914 18:55:03.880431  530104 system_pods.go:89] "coredns-66bff467f8-kw8p5" [7343d661-a6de-4c0c-994d-863480f09832] Running
	I0914 18:55:03.880444  530104 system_pods.go:89] "etcd-ingress-addon-legacy-480282" [78bf522e-75ce-49c1-be68-e8ff7a6f77ad] Running
	I0914 18:55:03.880449  530104 system_pods.go:89] "kindnet-x6mvc" [437e490a-f27a-4504-bd95-90c15172cd37] Running
	I0914 18:55:03.880454  530104 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-480282" [69e307f8-6076-4d9d-b1d5-94d968d3322e] Running
	I0914 18:55:03.880460  530104 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-480282" [09bb50ca-93a3-4b27-aeae-d2f7814148da] Running
	I0914 18:55:03.880468  530104 system_pods.go:89] "kube-proxy-rr4k2" [d8d5e9c9-6619-4bfb-abb5-0ce945f02e91] Running
	I0914 18:55:03.880481  530104 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-480282" [0049eaee-5043-4c04-bff7-af3ddbf20aa6] Running
	I0914 18:55:03.880490  530104 system_pods.go:89] "storage-provisioner" [2baa7f34-fc5d-4722-9c2f-6ce13317e782] Running
	I0914 18:55:03.880497  530104 system_pods.go:126] duration metric: took 203.537375ms to wait for k8s-apps to be running ...
	I0914 18:55:03.880510  530104 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:55:03.880573  530104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:55:03.894471  530104 system_svc.go:56] duration metric: took 13.948791ms WaitForService to wait for kubelet.
	I0914 18:55:03.894497  530104 kubeadm.go:581] duration metric: took 21.260226519s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 18:55:03.894516  530104 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:55:04.073764  530104 request.go:629] Waited for 179.175471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0914 18:55:04.076843  530104 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 18:55:04.076887  530104 node_conditions.go:123] node cpu capacity is 2
	I0914 18:55:04.076900  530104 node_conditions.go:105] duration metric: took 182.378546ms to run NodePressure ...
	I0914 18:55:04.076912  530104 start.go:228] waiting for startup goroutines ...
	I0914 18:55:04.076919  530104 start.go:233] waiting for cluster config update ...
	I0914 18:55:04.076929  530104 start.go:242] writing updated cluster config ...
	I0914 18:55:04.077236  530104 ssh_runner.go:195] Run: rm -f paused
	I0914 18:55:04.137050  530104 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0914 18:55:04.139495  530104 out.go:177] 
	W0914 18:55:04.141412  530104 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0914 18:55:04.143484  530104 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0914 18:55:04.145676  530104 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-480282" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c503e2ef61deb       a39a074194753       8 seconds ago        Exited              hello-world-app           2                   a30729ed9dc60       hello-world-app-5f5d8b66bb-x55d5
	192b5ec206f47       fa0c6bb795403       34 seconds ago       Running             nginx                     0                   a4e991fc1b660       nginx
	b76d1d81914e0       d7f0cba3aa5bf       56 seconds ago       Exited              controller                0                   b04dc337909cb       ingress-nginx-controller-7fcf777cb7-v27rj
	6002464f66d78       a883f7fc35610       About a minute ago   Exited              patch                     0                   e6edc0121f8cb       ingress-nginx-admission-patch-hzqjk
	b45c0aa765447       a883f7fc35610       About a minute ago   Exited              create                    0                   6074d2647a2af       ingress-nginx-admission-create-pmknr
	b07e724af0f46       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   11d7f0d1f37cb       coredns-66bff467f8-kw8p5
	8e269b130cebd       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   14662114d5417       storage-provisioner
	cc6b041ec5624       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   386c66b5d7f9b       kindnet-x6mvc
	d8c1914037598       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   3dfb77498e60c       kube-proxy-rr4k2
	e3d3dbde6768b       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   c83a5108c3a4a       etcd-ingress-addon-legacy-480282
	920ae32fa170d       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   fe7d3279bb376       kube-controller-manager-ingress-addon-legacy-480282
	d4bc13c473d27       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   83ce1e858c5b1       kube-apiserver-ingress-addon-legacy-480282
	0562b2d9cb363       095f37015706d       About a minute ago   Running             kube-scheduler            0                   8717a89c4587b       kube-scheduler-ingress-addon-legacy-480282
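Editor's note: this listing comes from the CRI, so it can be regenerated on the node with crictl; a sketch, assuming the docker driver used in this run so that `minikube ssh` reaches the node:

    # List all CRI containers, including exited ones, inside the minikube node
    out/minikube-linux-arm64 -p ingress-addon-legacy-480282 ssh "sudo crictl ps -a"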
	
	* 
	* ==> containerd <==
	* Sep 14 18:56:01 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:01.157970724Z" level=info msg="RemoveContainer for \"5ff269a7d4d237561bdd0fe24c90c72a00b6e313c4a62a492db20ce130302b08\" returns successfully"
	Sep 14 18:56:01 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:01.886857312Z" level=info msg="StopContainer for \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" with timeout 2 (s)"
	Sep 14 18:56:01 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:01.887304665Z" level=info msg="Stop container \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" with signal terminated"
	Sep 14 18:56:01 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:01.887798647Z" level=info msg="StopContainer for \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" with timeout 2 (s)"
	Sep 14 18:56:01 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:01.921026790Z" level=info msg="Skipping the sending of signal terminated to container \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" because a prior stop with timeout>0 request already sent the signal"
	Sep 14 18:56:03 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:03.922043776Z" level=info msg="Kill container \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\""
	Sep 14 18:56:03 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:03.922054270Z" level=info msg="Kill container \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\""
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.015902562Z" level=info msg="shim disconnected" id=b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.015977959Z" level=warning msg="cleaning up after shim disconnected" id=b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65 namespace=k8s.io
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.015989446Z" level=info msg="cleaning up dead shim"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.027790271Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:56:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4585 runtime=io.containerd.runc.v2\n"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.031515290Z" level=info msg="StopContainer for \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" returns successfully"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.031675545Z" level=info msg="StopContainer for \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" returns successfully"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.032320625Z" level=info msg="StopPodSandbox for \"b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b\""
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.032408338Z" level=info msg="Container to stop \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.032667333Z" level=info msg="StopPodSandbox for \"b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b\""
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.032720576Z" level=info msg="Container to stop \"b76d1d81914e011d568d3302ca598bc5861c6ae1b2c332f1ecb583c5e3346c65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.075361404Z" level=info msg="shim disconnected" id=b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.075578036Z" level=warning msg="cleaning up after shim disconnected" id=b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b namespace=k8s.io
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.075589360Z" level=info msg="cleaning up dead shim"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.087867190Z" level=warning msg="cleanup warnings time=\"2023-09-14T18:56:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4625 runtime=io.containerd.runc.v2\n"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.144483361Z" level=info msg="TearDown network for sandbox \"b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b\" successfully"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.144539090Z" level=info msg="StopPodSandbox for \"b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b\" returns successfully"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.149409342Z" level=info msg="TearDown network for sandbox \"b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b\" successfully"
	Sep 14 18:56:04 ingress-addon-legacy-480282 containerd[823]: time="2023-09-14T18:56:04.149462282Z" level=info msg="StopPodSandbox for \"b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b\" returns successfully"
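Editor's note: the StopContainer/StopPodSandbox pairs above are the teardown of the ingress-nginx controller after `addons disable ingress`. To pull more containerd history than this excerpt shows, one option (assuming containerd runs as a systemd unit inside the node, as it does with the docker driver):

    # Tail the containerd unit journal on the minikube node
    out/minikube-linux-arm64 -p ingress-addon-legacy-480282 ssh "sudo journalctl -u containerd --no-pager -n 100"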
	
	* 
	* ==> coredns [b07e724af0f462f415105d88138b7135d4bc6b2215b8c2d9df040c6d0ff07bd6] <==
	* [INFO] 10.244.0.5:55313 - 60025 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006176s
	[INFO] 10.244.0.5:37112 - 23323 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003034516s
	[INFO] 10.244.0.5:37112 - 64909 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002046797s
	[INFO] 10.244.0.5:55313 - 42029 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001449586s
	[INFO] 10.244.0.5:37112 - 20567 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000220752s
	[INFO] 10.244.0.5:55313 - 14495 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001412909s
	[INFO] 10.244.0.5:55313 - 34307 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000093268s
	[INFO] 10.244.0.5:50357 - 61020 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091389s
	[INFO] 10.244.0.5:50357 - 3934 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085801s
	[INFO] 10.244.0.5:50357 - 28827 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048083s
	[INFO] 10.244.0.5:50357 - 29284 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068808s
	[INFO] 10.244.0.5:37830 - 57893 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074478s
	[INFO] 10.244.0.5:50357 - 22410 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000179692s
	[INFO] 10.244.0.5:37830 - 26612 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000168173s
	[INFO] 10.244.0.5:37830 - 32937 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082044s
	[INFO] 10.244.0.5:37830 - 33021 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071401s
	[INFO] 10.244.0.5:37830 - 17689 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074453s
	[INFO] 10.244.0.5:37830 - 46961 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.001180638s
	[INFO] 10.244.0.5:50357 - 52529 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000919493s
	[INFO] 10.244.0.5:37830 - 43278 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003204198s
	[INFO] 10.244.0.5:37830 - 28211 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000999378s
	[INFO] 10.244.0.5:37830 - 23047 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060693s
	[INFO] 10.244.0.5:50357 - 52064 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002230445s
	[INFO] 10.244.0.5:50357 - 52206 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001130718s
	[INFO] 10.244.0.5:50357 - 63884 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066527s
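Editor's note: the NXDOMAIN bursts above are ordinary resolv.conf search-path expansion, not a DNS fault. With the default ndots:5, a querier in the ingress-nginx namespace resolving hello-world-app.default.svc.cluster.local first walks its search domains (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, plus the host's us-east-2.compute.internal suffix) before the absolute name returns NOERROR. To inspect this stream directly, a sketch assuming the standard kubeadm k8s-app=kube-dns label on the CoreDNS pods:

    # Fetch recent CoreDNS query logs from kube-system
    kubectl --context ingress-addon-legacy-480282 -n kube-system logs -l k8s-app=kube-dns --tail=100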
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-480282
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-480282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b
	                    minikube.k8s.io/name=ingress-addon-legacy-480282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T18_54_28_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 18:54:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-480282
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 18:56:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 18:56:00 +0000   Thu, 14 Sep 2023 18:54:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 18:56:00 +0000   Thu, 14 Sep 2023 18:54:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 18:56:00 +0000   Thu, 14 Sep 2023 18:54:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 18:56:00 +0000   Thu, 14 Sep 2023 18:54:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-480282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cd8515e389c4891a700c0e868626043
	  System UUID:                8e0aaf6c-8f88-4335-92d8-98f59f475908
	  Boot ID:                    5482c722-bf9c-42ea-8010-6373e20f2ddd
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.22
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-x55d5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-66bff467f8-kw8p5                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-ingress-addon-legacy-480282                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kindnet-x6mvc                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-480282             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-480282    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-rr4k2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-480282             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 99s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s   kubelet     Node ingress-addon-legacy-480282 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s   kubelet     Node ingress-addon-legacy-480282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s   kubelet     Node ingress-addon-legacy-480282 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s   kubelet     Node ingress-addon-legacy-480282 status is now: NodeReady
	  Normal  Starting                 85s   kube-proxy  Starting kube-proxy.
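Editor's note: this dump matches what `kubectl describe node` prints, so it can be refreshed at any point while the profile is still up:

    # Re-run the node description against the same context
    kubectl --context ingress-addon-legacy-480282 describe node ingress-addon-legacy-480282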
	
	* 
	* ==> dmesg <==
	* [  +0.000716] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000b46c2cc5
	[  +0.001125] FS-Cache: N-key=[8] '3a3c5c0100000000'
	[  +0.003134] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001025] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000005b9d8b56
	[  +0.001051] FS-Cache: O-key=[8] '3a3c5c0100000000'
	[  +0.000706] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=000000007cc3b60a
	[  +0.001054] FS-Cache: N-key=[8] '3a3c5c0100000000'
	[Sep14 18:53] FS-Cache: Duplicate cookie detected
	[  +0.000926] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001199] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=000000000f3bd533
	[  +0.001259] FS-Cache: O-key=[8] '393c5c0100000000'
	[  +0.000720] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=00000000b46c2cc5
	[  +0.001109] FS-Cache: N-key=[8] '393c5c0100000000'
	[  +0.453617] FS-Cache: Duplicate cookie detected
	[  +0.000772] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000d4bffb2c{9p.inode} n=00000000f298b64d
	[  +0.001134] FS-Cache: O-key=[8] '3f3c5c0100000000'
	[  +0.000719] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000934] FS-Cache: N-cookie d=00000000d4bffb2c{9p.inode} n=000000000e997ae2
	[  +0.001077] FS-Cache: N-key=[8] '3f3c5c0100000000'
	[ +27.251490] new mount options do not match the existing superblock, will be ignored
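Editor's note: the FS-Cache duplicate-cookie noise above comes from 9p mounts on the AWS host kernel and is almost certainly unrelated to the test failure. For a longer, human-timestamped ring buffer, a sketch (assuming util-linux dmesg inside the node image):

    # Human-readable kernel messages from the minikube node
    out/minikube-linux-arm64 -p ingress-addon-legacy-480282 ssh "sudo dmesg --ctime | tail -n 50"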
	
	* 
	* ==> etcd [e3d3dbde6768bedc1a61538b5aaf5e6ee57ffcb45c63e88a9b24768eea65ca3f] <==
	* raft2023/09/14 18:54:19 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/14 18:54:19 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/14 18:54:19 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/14 18:54:19 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-14 18:54:19.406322 W | auth: simple token is not cryptographically signed
	2023-09-14 18:54:19.415504 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-14 18:54:19.417828 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-14 18:54:19.418194 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 18:54:19.418524 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-14 18:54:19.419069 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/09/14 18:54:19 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-14 18:54:19.419652 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/09/14 18:54:20 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/14 18:54:20 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/14 18:54:20 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/14 18:54:20 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/14 18:54:20 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-14 18:54:20.397565 I | embed: ready to serve client requests
	2023-09-14 18:54:20.399161 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 18:54:20.399510 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-14 18:54:20.404634 I | etcdserver: published {Name:ingress-addon-legacy-480282 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-14 18:54:20.404940 I | embed: ready to serve client requests
	2023-09-14 18:54:20.406754 I | embed: serving client requests on 192.168.49.2:2379
	2023-09-14 18:54:20.436683 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-14 18:54:20.436774 I | etcdserver/api: enabled capabilities for version 3.4
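Editor's note: the etcd excerpt shows a clean single-node bootstrap (leader elected at term 2, cluster version set to 3.4). The same stream is available through the static pod, whose name appears in the pod list above:

    # Tail the etcd static-pod logs
    kubectl --context ingress-addon-legacy-480282 -n kube-system logs etcd-ingress-addon-legacy-480282 --tail=50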
	
	* 
	* ==> kernel <==
	*  18:56:10 up  4:38,  0 users,  load average: 0.61, 1.29, 1.25
	Linux ingress-addon-legacy-480282 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [cc6b041ec56245773492268f314aa8ab928178c51395e078a27fe8cae3b59f36] <==
	* I0914 18:54:45.532785       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 18:54:45.532856       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0914 18:54:45.532976       1 main.go:116] setting mtu 1500 for CNI 
	I0914 18:54:45.533027       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 18:54:45.533047       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 18:54:45.932898       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:54:46.025221       1 main.go:227] handling current node
	I0914 18:54:56.036409       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:54:56.036442       1 main.go:227] handling current node
	I0914 18:55:06.043810       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:55:06.043842       1 main.go:227] handling current node
	I0914 18:55:16.054391       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:55:16.054422       1 main.go:227] handling current node
	I0914 18:55:26.065435       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:55:26.065462       1 main.go:227] handling current node
	I0914 18:55:36.074377       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:55:36.074406       1 main.go:227] handling current node
	I0914 18:55:46.083941       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:55:46.083976       1 main.go:227] handling current node
	I0914 18:55:56.093558       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:55:56.093588       1 main.go:227] handling current node
	I0914 18:56:06.106120       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 18:56:06.106152       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d4bc13c473d2704f00e482c7641f5ce1c7d7efd0a3632c711e16a7fe5abb2303] <==
	* E0914 18:54:24.338133       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0914 18:54:24.582127       1 cache.go:39] Caches are synced for autoregister controller
	I0914 18:54:24.583104       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 18:54:24.583131       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0914 18:54:24.583819       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 18:54:24.586330       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0914 18:54:25.278668       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0914 18:54:25.278966       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 18:54:25.288757       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0914 18:54:25.292603       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0914 18:54:25.292624       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0914 18:54:25.717330       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 18:54:25.761523       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0914 18:54:25.838095       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0914 18:54:25.839388       1 controller.go:609] quota admission added evaluator for: endpoints
	I0914 18:54:25.843214       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 18:54:26.739289       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0914 18:54:27.460317       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0914 18:54:27.605252       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0914 18:54:30.824274       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 18:54:42.234504       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0914 18:54:42.247008       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0914 18:55:05.094788       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0914 18:55:32.927412       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0914 18:56:01.899797       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
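Editor's note: the final authentication error coincides with addon teardown: the ingress-nginx service-account token was invalidated while the controller was still reporting in. The apiserver static-pod logs can be fetched the same way as etcd's:

    # Tail the kube-apiserver static-pod logs
    kubectl --context ingress-addon-legacy-480282 -n kube-system logs kube-apiserver-ingress-addon-legacy-480282 --tail=50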
	
	* 
	* ==> kube-controller-manager [920ae32fa170dd206d2512a39797a19d52b812bcbb04d4fb32536068298fcb39] <==
	* W0914 18:54:42.542110       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-480282. Assuming now as a timestamp.
	I0914 18:54:42.542143       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0914 18:54:42.542374       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0914 18:54:42.542668       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-480282", UID:"aec88080-10ac-4284-b384-490a8fc36bde", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-480282 event: Registered Node ingress-addon-legacy-480282 in Controller
	I0914 18:54:42.616825       1 shared_informer.go:230] Caches are synced for attach detach 
	I0914 18:54:42.627597       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f8e3eb4c-ea47-4421-8ec8-8cf74038e572", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0914 18:54:42.690409       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0914 18:54:42.695626       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 18:54:42.695796       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 18:54:42.716220       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 18:54:42.717794       1 shared_informer.go:230] Caches are synced for disruption 
	I0914 18:54:42.717819       1 disruption.go:339] Sending events to api server.
	I0914 18:54:42.739690       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0914 18:54:42.777317       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"e9d7890d-e567-4bcc-afb0-7cbc0c14a431", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-vzcnf
	I0914 18:54:42.787878       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 18:54:42.790261       1 shared_informer.go:230] Caches are synced for stateful set 
	I0914 18:54:42.791096       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 18:55:05.026508       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"692c1c29-21d1-4fee-b258-78f8231c1544", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0914 18:55:05.034368       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"646b2fd6-c00c-4468-ab64-4c54645d8340", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-v27rj
	I0914 18:55:05.132445       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"47aa5888-80a0-4abf-8731-5d5f976fafa1", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-pmknr
	I0914 18:55:05.178204       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6646db53-e771-4a18-bbde-3390b6a539ba", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hzqjk
	I0914 18:55:07.970953       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6646db53-e771-4a18-bbde-3390b6a539ba", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0914 18:55:07.994372       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"47aa5888-80a0-4abf-8731-5d5f976fafa1", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0914 18:55:42.706738       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ae660909-0d0f-40f0-9419-ca448e4b9976", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0914 18:55:42.734497       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"79589f10-fcc3-438c-b3f6-f4ca50a099c0", APIVersion:"apps/v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-x55d5
	
	* 
	* ==> kube-proxy [d8c1914037598478918444397fbf3e3c127b4c3b0265e00e3fa33766bdc1691f] <==
	* W0914 18:54:44.725171       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0914 18:54:44.738566       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0914 18:54:44.738668       1 server_others.go:186] Using iptables Proxier.
	I0914 18:54:44.739005       1 server.go:583] Version: v1.18.20
	I0914 18:54:44.742495       1 config.go:315] Starting service config controller
	I0914 18:54:44.742651       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0914 18:54:44.743070       1 config.go:133] Starting endpoints config controller
	I0914 18:54:44.743153       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0914 18:54:44.843192       1 shared_informer.go:230] Caches are synced for service config 
	I0914 18:54:44.843323       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [0562b2d9cb3633522fe81b1d710709a302b667f50dcbfd24a38b61cc51567c71] <==
	* W0914 18:54:24.426902       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 18:54:24.496286       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 18:54:24.496484       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 18:54:24.498572       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0914 18:54:24.498987       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:54:24.499086       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:54:24.499183       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 18:54:24.522193       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:54:24.523088       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:54:24.523322       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 18:54:24.523435       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:54:24.525931       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:54:24.526132       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:54:24.526263       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:54:24.526536       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:54:24.526798       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:54:24.532756       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:54:24.532939       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 18:54:24.533036       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:54:25.400990       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:54:25.473423       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:54:25.547372       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0914 18:54:28.799337       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0914 18:54:42.324124       1 factory.go:503] pod: kube-system/coredns-66bff467f8-kw8p5 is already present in the active queue
	E0914 18:54:42.372331       1 factory.go:503] pod: kube-system/coredns-66bff467f8-vzcnf is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Sep 14 18:55:48 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:48.111036    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7de05414ea0507416c2578b42f5c2aad272ff7d99e2da83f798efab3bfa2dea8
	Sep 14 18:55:48 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:48.111430    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5ff269a7d4d237561bdd0fe24c90c72a00b6e313c4a62a492db20ce130302b08
	Sep 14 18:55:48 ingress-addon-legacy-480282 kubelet[1665]: E0914 18:55:48.111703    1665 pod_workers.go:191] Error syncing pod ed7ad6ff-688e-417c-8571-a345bca0c433 ("hello-world-app-5f5d8b66bb-x55d5_default(ed7ad6ff-688e-417c-8571-a345bca0c433)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x55d5_default(ed7ad6ff-688e-417c-8571-a345bca0c433)"
	Sep 14 18:55:49 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:49.114957    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5ff269a7d4d237561bdd0fe24c90c72a00b6e313c4a62a492db20ce130302b08
	Sep 14 18:55:49 ingress-addon-legacy-480282 kubelet[1665]: E0914 18:55:49.115208    1665 pod_workers.go:191] Error syncing pod ed7ad6ff-688e-417c-8571-a345bca0c433 ("hello-world-app-5f5d8b66bb-x55d5_default(ed7ad6ff-688e-417c-8571-a345bca0c433)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x55d5_default(ed7ad6ff-688e-417c-8571-a345bca0c433)"
	Sep 14 18:55:50 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:50.848321    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5fe82c9b4b678147b0c021584c334ca2ef6ad9df2d5dfed4fdddea825e2722e9
	Sep 14 18:55:50 ingress-addon-legacy-480282 kubelet[1665]: E0914 18:55:50.848762    1665 pod_workers.go:191] Error syncing pod b41c997f-5939-4b79-8062-9c0af39bce6d ("kube-ingress-dns-minikube_kube-system(b41c997f-5939-4b79-8062-9c0af39bce6d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b41c997f-5939-4b79-8062-9c0af39bce6d)"
	Sep 14 18:55:58 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:58.721698    1665 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-7w2vv" (UniqueName: "kubernetes.io/secret/b41c997f-5939-4b79-8062-9c0af39bce6d-minikube-ingress-dns-token-7w2vv") pod "b41c997f-5939-4b79-8062-9c0af39bce6d" (UID: "b41c997f-5939-4b79-8062-9c0af39bce6d")
	Sep 14 18:55:58 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:58.725997    1665 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41c997f-5939-4b79-8062-9c0af39bce6d-minikube-ingress-dns-token-7w2vv" (OuterVolumeSpecName: "minikube-ingress-dns-token-7w2vv") pod "b41c997f-5939-4b79-8062-9c0af39bce6d" (UID: "b41c997f-5939-4b79-8062-9c0af39bce6d"). InnerVolumeSpecName "minikube-ingress-dns-token-7w2vv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 18:55:58 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:58.821997    1665 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-7w2vv" (UniqueName: "kubernetes.io/secret/b41c997f-5939-4b79-8062-9c0af39bce6d-minikube-ingress-dns-token-7w2vv") on node "ingress-addon-legacy-480282" DevicePath ""
	Sep 14 18:55:59 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:55:59.134802    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5fe82c9b4b678147b0c021584c334ca2ef6ad9df2d5dfed4fdddea825e2722e9
	Sep 14 18:56:00 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:00.849287    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5ff269a7d4d237561bdd0fe24c90c72a00b6e313c4a62a492db20ce130302b08
	Sep 14 18:56:01 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:01.142496    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5ff269a7d4d237561bdd0fe24c90c72a00b6e313c4a62a492db20ce130302b08
	Sep 14 18:56:01 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:01.142841    1665 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c503e2ef61debc6c4f19270a54eb7fe774b7f5f5a8df54de17baac20b827ed43
	Sep 14 18:56:01 ingress-addon-legacy-480282 kubelet[1665]: E0914 18:56:01.143092    1665 pod_workers.go:191] Error syncing pod ed7ad6ff-688e-417c-8571-a345bca0c433 ("hello-world-app-5f5d8b66bb-x55d5_default(ed7ad6ff-688e-417c-8571-a345bca0c433)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x55d5_default(ed7ad6ff-688e-417c-8571-a345bca0c433)"
	Sep 14 18:56:01 ingress-addon-legacy-480282 kubelet[1665]: E0914 18:56:01.900118    1665 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-v27rj.1784d8d34f9fda05", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-v27rj", UID:"af852644-3190-4114-ba57-88034f1380b3", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-480282"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138f3b074ac9005, ext:94476264185, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138f3b074ac9005, ext:94476264185, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-v27rj.1784d8d34f9fda05" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 14 18:56:01 ingress-addon-legacy-480282 kubelet[1665]: E0914 18:56:01.913672    1665 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-v27rj.1784d8d34f9fda05", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-v27rj", UID:"af852644-3190-4114-ba57-88034f1380b3", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-480282"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138f3b074ac9005, ext:94476264185, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138f3b074953ccb, ext:94474735543, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-v27rj.1784d8d34f9fda05" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 14 18:56:04 ingress-addon-legacy-480282 kubelet[1665]: W0914 18:56:04.152075    1665 pod_container_deletor.go:77] Container "b04dc337909cbda5090d8a3b478e5a4e996560de9f887b849ec4d49fe1cc406b" not found in pod's containers
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:06.092150    1665 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wcbcs" (UniqueName: "kubernetes.io/secret/af852644-3190-4114-ba57-88034f1380b3-ingress-nginx-token-wcbcs") pod "af852644-3190-4114-ba57-88034f1380b3" (UID: "af852644-3190-4114-ba57-88034f1380b3")
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:06.092223    1665 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/af852644-3190-4114-ba57-88034f1380b3-webhook-cert") pod "af852644-3190-4114-ba57-88034f1380b3" (UID: "af852644-3190-4114-ba57-88034f1380b3")
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:06.099635    1665 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af852644-3190-4114-ba57-88034f1380b3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "af852644-3190-4114-ba57-88034f1380b3" (UID: "af852644-3190-4114-ba57-88034f1380b3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:06.103760    1665 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af852644-3190-4114-ba57-88034f1380b3-ingress-nginx-token-wcbcs" (OuterVolumeSpecName: "ingress-nginx-token-wcbcs") pod "af852644-3190-4114-ba57-88034f1380b3" (UID: "af852644-3190-4114-ba57-88034f1380b3"). InnerVolumeSpecName "ingress-nginx-token-wcbcs". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:06.192548    1665 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/af852644-3190-4114-ba57-88034f1380b3-webhook-cert") on node "ingress-addon-legacy-480282" DevicePath ""
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: I0914 18:56:06.192625    1665 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wcbcs" (UniqueName: "kubernetes.io/secret/af852644-3190-4114-ba57-88034f1380b3-ingress-nginx-token-wcbcs") on node "ingress-addon-legacy-480282" DevicePath ""
	Sep 14 18:56:06 ingress-addon-legacy-480282 kubelet[1665]: W0914 18:56:06.853020    1665 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/af852644-3190-4114-ba57-88034f1380b3/volumes" does not exist
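Editor's note: kubelet runs as a systemd unit on the node, so the full journal (including entries older than this window) is reachable over ssh:

    # Last 200 kubelet journal entries on the node
    out/minikube-linux-arm64 -p ingress-addon-legacy-480282 ssh "sudo journalctl -u kubelet --no-pager -n 200"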
	
	* 
	* ==> storage-provisioner [8e269b130cebdde111cfdfd68c1af892cfe06435df613506eedc965a7388114e] <==
	* I0914 18:54:46.769702       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:54:46.781808       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:54:46.782112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:54:46.789491       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:54:46.789871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-480282_caf910d5-28e1-4b42-b34b-af807c13ebf5!
	I0914 18:54:46.791351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a5b2a94-f7b3-4f7b-945c-a592ed4b50eb", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-480282_caf910d5-28e1-4b42-b34b-af807c13ebf5 became leader
	I0914 18:54:46.891033       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-480282_caf910d5-28e1-4b42-b34b-af807c13ebf5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-480282 -n ingress-addon-legacy-480282
E0914 18:56:10.722855  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-480282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.99s)
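Editor's note: to iterate on just this case locally, the minikube repository's integration harness can be scoped to the failing subtest; a sketch, assuming the repo's documented `make integration` target and TEST_ARGS convention rather than a verified invocation:

    # Re-run only the failing legacy-ingress validation (docker driver, containerd runtime)
    env TEST_ARGS="-minikube-start-args=--driver=docker --container-runtime=containerd -test.run TestIngressAddonLegacy/serial/ValidateIngressAddons" make integration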

                                                
                                    

Test pass (261/303)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 19.35
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.1/json-events 15.76
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.61
22 TestAddons/Setup 136.63
24 TestAddons/parallel/Registry 16.13
26 TestAddons/parallel/InspektorGadget 11.13
27 TestAddons/parallel/MetricsServer 6.17
30 TestAddons/parallel/CSI 54.61
31 TestAddons/parallel/Headlamp 11.69
32 TestAddons/parallel/CloudSpanner 5.75
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.36
37 TestCertOptions 37.25
38 TestCertExpiration 228.84
40 TestForceSystemdFlag 44.4
41 TestForceSystemdEnv 43.68
42 TestDockerEnvContainerd 50.99
47 TestErrorSpam/setup 31.83
48 TestErrorSpam/start 0.85
49 TestErrorSpam/status 1.1
50 TestErrorSpam/pause 1.91
51 TestErrorSpam/unpause 1.97
52 TestErrorSpam/stop 1.45
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 59.07
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 6.13
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.31
64 TestFunctional/serial/CacheCmd/cache/add_local 1.5
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.59
69 TestFunctional/serial/CacheCmd/cache/delete 0.12
70 TestFunctional/serial/MinikubeKubectlCmd 0.15
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
74 TestFunctional/serial/LogsCmd 1.58
78 TestFunctional/parallel/ConfigCmd 0.48
79 TestFunctional/parallel/DashboardCmd 9.35
80 TestFunctional/parallel/DryRun 0.75
81 TestFunctional/parallel/InternationalLanguage 0.31
82 TestFunctional/parallel/StatusCmd 1.34
86 TestFunctional/parallel/ServiceCmdConnect 9.64
87 TestFunctional/parallel/AddonsCmd 0.16
88 TestFunctional/parallel/PersistentVolumeClaim 83.41
90 TestFunctional/parallel/SSHCmd 0.72
91 TestFunctional/parallel/CpCmd 1.78
93 TestFunctional/parallel/FileSync 0.38
94 TestFunctional/parallel/CertSync 2.32
98 TestFunctional/parallel/NodeLabels 0.11
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
102 TestFunctional/parallel/License 0.33
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/ServiceCmd/DeployApp 7.28
110 TestFunctional/parallel/ServiceCmd/List 0.4
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
113 TestFunctional/parallel/ServiceCmd/Format 0.41
114 TestFunctional/parallel/ServiceCmd/URL 0.42
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/ProfileCmd/profile_list 0.41
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
118 TestFunctional/parallel/MountCmd/any-port 8.82
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/MountCmd/specific-port 2.75
124 TestFunctional/parallel/MountCmd/VerifyCleanup 2.54
125 TestFunctional/parallel/Version/short 0.07
126 TestFunctional/parallel/Version/components 1.5
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
131 TestFunctional/parallel/ImageCommands/ImageBuild 2.96
132 TestFunctional/parallel/ImageCommands/Setup 1.77
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
143 TestFunctional/delete_addon-resizer_images 0.09
144 TestFunctional/delete_my-image_image 0.02
145 TestFunctional/delete_minikube_cached_images 0.02
149 TestIngressAddonLegacy/StartLegacyK8sCluster 91.77
151 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.05
152 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.74
156 TestJSONOutput/start/Command 59.24
157 TestJSONOutput/start/Audit 0
159 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/pause/Command 0.8
163 TestJSONOutput/pause/Audit 0
165 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/unpause/Command 0.73
169 TestJSONOutput/unpause/Audit 0
171 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/stop/Command 5.88
175 TestJSONOutput/stop/Audit 0
177 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
179 TestErrorJSONOutput 0.24
181 TestKicCustomNetwork/create_custom_network 47.15
182 TestKicCustomNetwork/use_default_bridge_network 32.87
183 TestKicExistingNetwork 34.49
184 TestKicCustomSubnet 36.69
185 TestKicStaticIP 34.79
186 TestMainNoArgs 0.06
187 TestMinikubeProfile 68.3
190 TestMountStart/serial/StartWithMountFirst 6.54
191 TestMountStart/serial/VerifyMountFirst 0.28
192 TestMountStart/serial/StartWithMountSecond 9.19
193 TestMountStart/serial/VerifyMountSecond 0.29
194 TestMountStart/serial/DeleteFirst 1.68
195 TestMountStart/serial/VerifyMountPostDelete 0.28
196 TestMountStart/serial/Stop 1.21
197 TestMountStart/serial/RestartStopped 7.53
198 TestMountStart/serial/VerifyMountPostStop 0.28
201 TestMultiNode/serial/FreshStart2Nodes 78.26
202 TestMultiNode/serial/DeployApp2Nodes 5.52
203 TestMultiNode/serial/PingHostFrom2Pods 1.14
204 TestMultiNode/serial/AddNode 19.59
205 TestMultiNode/serial/ProfileList 0.35
206 TestMultiNode/serial/CopyFile 11.25
207 TestMultiNode/serial/StopNode 2.42
208 TestMultiNode/serial/StartAfterStop 12.49
209 TestMultiNode/serial/RestartKeepsNodes 117.85
210 TestMultiNode/serial/DeleteNode 5.16
211 TestMultiNode/serial/StopMultiNode 24.18
212 TestMultiNode/serial/RestartMultiNode 82.18
213 TestMultiNode/serial/ValidateNameConflict 32.66
218 TestPreload 184.55
220 TestScheduledStopUnix 110.03
223 TestInsufficientStorage 10.78
224 TestRunningBinaryUpgrade 81.04
226 TestKubernetesUpgrade 389.48
227 TestMissingContainerUpgrade 164.51
229 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
230 TestNoKubernetes/serial/StartWithK8s 43.83
231 TestNoKubernetes/serial/StartWithStopK8s 17.55
232 TestNoKubernetes/serial/Start 9.8
233 TestNoKubernetes/serial/VerifyK8sNotRunning 0.51
234 TestNoKubernetes/serial/ProfileList 6.62
235 TestNoKubernetes/serial/Stop 1.25
236 TestNoKubernetes/serial/StartNoArgs 6.9
237 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
238 TestStoppedBinaryUpgrade/Setup 1.28
239 TestStoppedBinaryUpgrade/Upgrade 96.2
240 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
249 TestPause/serial/Start 56.38
250 TestPause/serial/SecondStartNoReconfiguration 6.6
251 TestPause/serial/Pause 0.9
252 TestPause/serial/VerifyStatus 0.36
253 TestPause/serial/Unpause 0.77
254 TestPause/serial/PauseAgain 0.89
255 TestPause/serial/DeletePaused 2.98
256 TestPause/serial/VerifyDeletedResources 0.36
264 TestNetworkPlugins/group/false 4.88
269 TestStartStop/group/old-k8s-version/serial/FirstStart 124.68
270 TestStartStop/group/old-k8s-version/serial/DeployApp 10.57
271 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
272 TestStartStop/group/old-k8s-version/serial/Stop 12.22
273 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
274 TestStartStop/group/old-k8s-version/serial/SecondStart 667.78
276 TestStartStop/group/no-preload/serial/FirstStart 82.21
277 TestStartStop/group/no-preload/serial/DeployApp 9.65
278 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
279 TestStartStop/group/no-preload/serial/Stop 12.1
280 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
281 TestStartStop/group/no-preload/serial/SecondStart 341.95
282 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
283 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
284 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
285 TestStartStop/group/no-preload/serial/Pause 3.43
287 TestStartStop/group/embed-certs/serial/FirstStart 59.08
288 TestStartStop/group/embed-certs/serial/DeployApp 8.49
289 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
290 TestStartStop/group/embed-certs/serial/Stop 12.15
291 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
292 TestStartStop/group/embed-certs/serial/SecondStart 339.46
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
296 TestStartStop/group/old-k8s-version/serial/Pause 3.54
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.81
299 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.3
301 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.16
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
303 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 349.48
304 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.04
305 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
306 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
307 TestStartStop/group/embed-certs/serial/Pause 3.46
309 TestStartStop/group/newest-cni/serial/FirstStart 46.68
310 TestStartStop/group/newest-cni/serial/DeployApp 0
311 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.4
312 TestStartStop/group/newest-cni/serial/Stop 1.32
313 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/newest-cni/serial/SecondStart 30.32
315 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
318 TestStartStop/group/newest-cni/serial/Pause 3.48
319 TestNetworkPlugins/group/auto/Start 57.53
320 TestNetworkPlugins/group/auto/KubeletFlags 0.32
321 TestNetworkPlugins/group/auto/NetCatPod 9.41
322 TestNetworkPlugins/group/auto/DNS 0.23
323 TestNetworkPlugins/group/auto/Localhost 0.19
324 TestNetworkPlugins/group/auto/HairPin 0.2
325 TestNetworkPlugins/group/kindnet/Start 68.18
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.04
327 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
328 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.55
329 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.54
330 TestNetworkPlugins/group/calico/Start 77.05
331 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
332 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
333 TestNetworkPlugins/group/kindnet/NetCatPod 10.48
334 TestNetworkPlugins/group/kindnet/DNS 0.33
335 TestNetworkPlugins/group/kindnet/Localhost 0.27
336 TestNetworkPlugins/group/kindnet/HairPin 0.24
337 TestNetworkPlugins/group/custom-flannel/Start 63.41
338 TestNetworkPlugins/group/calico/ControllerPod 5.04
339 TestNetworkPlugins/group/calico/KubeletFlags 0.54
340 TestNetworkPlugins/group/calico/NetCatPod 11.66
341 TestNetworkPlugins/group/calico/DNS 0.26
342 TestNetworkPlugins/group/calico/Localhost 0.19
343 TestNetworkPlugins/group/calico/HairPin 0.21
344 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
345 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.52
346 TestNetworkPlugins/group/enable-default-cni/Start 93.66
347 TestNetworkPlugins/group/custom-flannel/DNS 0.39
348 TestNetworkPlugins/group/custom-flannel/Localhost 0.35
349 TestNetworkPlugins/group/custom-flannel/HairPin 0.35
350 TestNetworkPlugins/group/flannel/Start 65.38
351 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
352 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.46
353 TestNetworkPlugins/group/flannel/ControllerPod 5.03
354 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
355 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
356 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
357 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
358 TestNetworkPlugins/group/flannel/NetCatPod 11.36
359 TestNetworkPlugins/group/flannel/DNS 0.26
360 TestNetworkPlugins/group/flannel/Localhost 0.26
361 TestNetworkPlugins/group/flannel/HairPin 0.32
362 TestNetworkPlugins/group/bridge/Start 87.96
363 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
364 TestNetworkPlugins/group/bridge/NetCatPod 9.32
365 TestNetworkPlugins/group/bridge/DNS 0.2
366 TestNetworkPlugins/group/bridge/Localhost 0.17
367 TestNetworkPlugins/group/bridge/HairPin 0.19

TestDownloadOnly/v1.16.0/json-events (19.35s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-715947 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-715947 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (19.349984954s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.35s)
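
The harness lines above follow a fixed pattern: log the command, run it, and log the wall-clock duration on success. As a rough illustration (not the actual minikube test helper; the binary path and flags are copied from the log above), the same behavior can be sketched with os/exec and time:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"start", "-o=json", "--download-only", "-p", "download-only-715947",
			"--force", "--alsologtostderr", "--kubernetes-version=v1.16.0",
			"--container-runtime=containerd", "--driver=docker")
		fmt.Printf("(dbg) Run:  %s\n", cmd) // *exec.Cmd implements String()
		start := time.Now()
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("(dbg) Non-zero exit: %v\n%s", err, out)
			return
		}
		fmt.Printf("(dbg) Done: %s: (%s)\n", cmd, time.Since(start))
	}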

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-715947
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-715947: exit status 85 (80.037489ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-715947 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |          |
	|         | -p download-only-715947        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:43:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:43:10.506292  498035 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:43:10.506505  498035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:10.506515  498035 out.go:309] Setting ErrFile to fd 2...
	I0914 18:43:10.506522  498035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:10.507166  498035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	W0914 18:43:10.507436  498035 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17217-492678/.minikube/config/config.json: open /home/jenkins/minikube-integration/17217-492678/.minikube/config/config.json: no such file or directory
	I0914 18:43:10.507997  498035 out.go:303] Setting JSON to true
	I0914 18:43:10.508934  498035 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15934,"bootTime":1694701057,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:43:10.509073  498035 start.go:138] virtualization:  
	I0914 18:43:10.512064  498035 out.go:97] [download-only-715947] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:43:10.514048  498035 out.go:169] MINIKUBE_LOCATION=17217
	W0914 18:43:10.512306  498035 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 18:43:10.512375  498035 notify.go:220] Checking for updates...
	I0914 18:43:10.517615  498035 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:43:10.519553  498035 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:43:10.521339  498035 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:43:10.523108  498035 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 18:43:10.527518  498035 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 18:43:10.527779  498035 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:43:10.551906  498035 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:43:10.551986  498035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:43:10.631867  498035 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2023-09-14 18:43:10.622400437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:43:10.631974  498035 docker.go:294] overlay module found
	I0914 18:43:10.633989  498035 out.go:97] Using the docker driver based on user configuration
	I0914 18:43:10.634040  498035 start.go:298] selected driver: docker
	I0914 18:43:10.634051  498035 start.go:902] validating driver "docker" against <nil>
	I0914 18:43:10.634173  498035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:43:10.700360  498035 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2023-09-14 18:43:10.69049116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:43:10.700530  498035 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 18:43:10.700871  498035 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 18:43:10.701026  498035 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 18:43:10.703292  498035 out.go:169] Using Docker driver with root privileges
	I0914 18:43:10.705245  498035 cni.go:84] Creating CNI manager for ""
	I0914 18:43:10.705265  498035 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:43:10.705278  498035 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 18:43:10.705293  498035 start_flags.go:321] config:
	{Name:download-only-715947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-715947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:43:10.707408  498035 out.go:97] Starting control plane node download-only-715947 in cluster download-only-715947
	I0914 18:43:10.707425  498035 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0914 18:43:10.709365  498035 out.go:97] Pulling base image ...
	I0914 18:43:10.709386  498035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0914 18:43:10.709528  498035 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0914 18:43:10.726373  498035 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0914 18:43:10.726544  498035 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory
	I0914 18:43:10.726648  498035 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0914 18:43:10.769237  498035 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0914 18:43:10.769269  498035 cache.go:57] Caching tarball of preloaded images
	I0914 18:43:10.770435  498035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0914 18:43:10.772599  498035 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0914 18:43:10.772618  498035 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:43:10.897110  498035 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0914 18:43:18.189472  498035 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 as a tarball
	I0914 18:43:22.243555  498035 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:43:22.243667  498035 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:43:23.350833  498035 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0914 18:43:23.351220  498035 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/download-only-715947/config.json ...
	I0914 18:43:23.351255  498035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/download-only-715947/config.json: {Name:mk0a86bbebd746b101fa134ce8c031b2b9f7328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:43:23.351456  498035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0914 18:43:23.351639  498035 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17217-492678/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-715947"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
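
Both LogsDuration checks hinge on "minikube logs" exiting with status 85 when the profile has no control plane node yet (only artifacts were downloaded, as the "The control plane node \"\" does not exist" message above shows). A minimal sketch of asserting a specific exit code with os/exec — an illustration, not the real aaa_download_only_test.go code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-715947").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got the expected exit status 85: no control plane node exists yet")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}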

TestDownloadOnly/v1.28.1/json-events (15.76s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-715947 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-715947 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (15.762241562s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (15.76s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-715947
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-715947: exit status 85 (76.781563ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-715947 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |          |
	|         | -p download-only-715947        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-715947 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |          |
	|         | -p download-only-715947        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:43:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:43:29.937714  498111 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:43:29.937873  498111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:29.937881  498111 out.go:309] Setting ErrFile to fd 2...
	I0914 18:43:29.937887  498111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:29.938138  498111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	W0914 18:43:29.938283  498111 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17217-492678/.minikube/config/config.json: open /home/jenkins/minikube-integration/17217-492678/.minikube/config/config.json: no such file or directory
	I0914 18:43:29.938531  498111 out.go:303] Setting JSON to true
	I0914 18:43:29.939347  498111 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15953,"bootTime":1694701057,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:43:29.939420  498111 start.go:138] virtualization:  
	I0914 18:43:29.941970  498111 out.go:97] [download-only-715947] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:43:29.944048  498111 out.go:169] MINIKUBE_LOCATION=17217
	I0914 18:43:29.942297  498111 notify.go:220] Checking for updates...
	I0914 18:43:29.946321  498111 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:43:29.947990  498111 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:43:29.949958  498111 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:43:29.952103  498111 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 18:43:29.955943  498111 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 18:43:29.956500  498111 config.go:182] Loaded profile config "download-only-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0914 18:43:29.956571  498111 start.go:810] api.Load failed for download-only-715947: filestore "download-only-715947": Docker machine "download-only-715947" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 18:43:29.956723  498111 driver.go:373] Setting default libvirt URI to qemu:///system
	W0914 18:43:29.956761  498111 start.go:810] api.Load failed for download-only-715947: filestore "download-only-715947": Docker machine "download-only-715947" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 18:43:29.980015  498111 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:43:29.980115  498111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:43:30.071642  498111 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-14 18:43:30.058431967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:43:30.071771  498111 docker.go:294] overlay module found
	I0914 18:43:30.074169  498111 out.go:97] Using the docker driver based on existing profile
	I0914 18:43:30.074232  498111 start.go:298] selected driver: docker
	I0914 18:43:30.074241  498111 start.go:902] validating driver "docker" against &{Name:download-only-715947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-715947 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:43:30.074446  498111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:43:30.151771  498111 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-14 18:43:30.139693064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:43:30.152307  498111 cni.go:84] Creating CNI manager for ""
	I0914 18:43:30.152341  498111 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:43:30.152354  498111 start_flags.go:321] config:
	{Name:download-only-715947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-715947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInter
val:1m0s}
	I0914 18:43:30.155334  498111 out.go:97] Starting control plane node download-only-715947 in cluster download-only-715947
	I0914 18:43:30.155373  498111 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0914 18:43:30.157392  498111 out.go:97] Pulling base image ...
	I0914 18:43:30.157443  498111 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:43:30.157860  498111 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0914 18:43:30.177327  498111 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0914 18:43:30.177479  498111 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory
	I0914 18:43:30.177507  498111 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory, skipping pull
	I0914 18:43:30.177515  498111 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in cache, skipping pull
	I0914 18:43:30.177524  498111 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 as a tarball
	I0914 18:43:30.286748  498111 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4
	I0914 18:43:30.286817  498111 cache.go:57] Caching tarball of preloaded images
	I0914 18:43:30.287005  498111 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:43:30.289328  498111 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0914 18:43:30.289355  498111 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:43:30.443191  498111 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:a95a45d80ac0b4b5848efd127ce0fe53 -> /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4
	I0914 18:43:41.467692  498111 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:43:41.467826  498111 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17217-492678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 18:43:42.386797  498111 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on containerd
	I0914 18:43:42.386943  498111 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/download-only-715947/config.json ...
	I0914 18:43:42.387156  498111 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0914 18:43:42.387902  498111 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17217-492678/.minikube/cache/linux/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-715947"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-715947
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-080763 --alsologtostderr --binary-mirror http://127.0.0.1:43515 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-080763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-080763
--- PASS: TestBinaryMirror (0.61s)
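
TestBinaryMirror passes --binary-mirror http://127.0.0.1:43515, so minikube fetches kubectl/kubelet/kubeadm from that endpoint instead of dl.k8s.io. A mirror presumably only needs to serve the release tree as static files; a minimal sketch under that assumption (the directory layout and handler are illustrative, not the test's actual fixture):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve files such as ./mirror/release/v1.28.1/bin/linux/arm64/kubectl
		// so that --binary-mirror http://127.0.0.1:43515 resolves downloads locally.
		log.Fatal(http.ListenAndServe("127.0.0.1:43515",
			http.FileServer(http.Dir("./mirror"))))
	}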

TestAddons/Setup (136.63s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-531284 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-531284 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m16.633360576s)
--- PASS: TestAddons/Setup (136.63s)

TestAddons/parallel/Registry (16.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 31.809421ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-shj4x" [0364cf43-d68c-44d4-8ef3-8a69c44bd62b] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0277943s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4vs8q" [7cadbedb-cc49-4ca9-9df2-fd1e7019f21e] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015202326s
addons_test.go:316: (dbg) Run:  kubectl --context addons-531284 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-531284 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-531284 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.818343534s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 ip
2023/09/14 18:46:19 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.13s)
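
The registry check boils down to an HTTP reachability probe: wget --spider requests the URL without downloading a body, and the harness then hits the host-mapped endpoint directly (the "GET http://192.168.49.2:5000" line above). A rough Go equivalent using a HEAD request — the URL comes from the log; the timeout value is an assumption:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second} // timeout is an assumption
		resp, err := client.Head("http://192.168.49.2:5000") // endpoint from the log above
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}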

TestAddons/parallel/InspektorGadget (11.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kcm25" [9b71efbd-cf9a-4ca8-ab00-02ea493cf5b9] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011870129s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-531284
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-531284: (6.115857625s)
--- PASS: TestAddons/parallel/InspektorGadget (11.13s)

TestAddons/parallel/MetricsServer (6.17s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.703121ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-9zr98" [7582499e-b5c4-46d2-b45e-b3d4b0c91902] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014260333s
addons_test.go:391: (dbg) Run:  kubectl --context addons-531284 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p addons-531284 addons disable metrics-server --alsologtostderr -v=1: (1.008473108s)
--- PASS: TestAddons/parallel/MetricsServer (6.17s)
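
The recurring "waiting 6m0s for pods matching <selector>" lines are a poll-with-deadline loop over a label selector. A generic sketch of that pattern (not minikube's helpers_test.go code; the kubectl invocation and 5s interval are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForRunning polls kubectl until at least one pod matching the
	// selector reports phase Running, or the deadline passes.
	func waitForRunning(kctx, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", kctx, "-n", ns,
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if strings.Contains(string(out), "Running") {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("timed out after %s waiting for %q", timeout, selector)
	}

	func main() {
		if err := waitForRunning("addons-531284", "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}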

TestAddons/parallel/CSI (54.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.755165ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-531284 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-531284 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [774a9659-6c02-4955-a395-2638eb381b54] Pending
helpers_test.go:344: "task-pv-pod" [774a9659-6c02-4955-a395-2638eb381b54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [774a9659-6c02-4955-a395-2638eb381b54] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.032989599s
addons_test.go:560: (dbg) Run:  kubectl --context addons-531284 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-531284 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-531284 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-531284 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-531284 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-531284 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-531284 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-531284 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c811cb57-3dcc-4713-94ca-e0fa04946671] Pending
helpers_test.go:344: "task-pv-pod-restore" [c811cb57-3dcc-4713-94ca-e0fa04946671] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c811cb57-3dcc-4713-94ca-e0fa04946671] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.018144047s
addons_test.go:602: (dbg) Run:  kubectl --context addons-531284 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-531284 delete pod task-pv-pod-restore: (1.146237832s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-531284 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-531284 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-531284 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.141997113s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-531284 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:618: (dbg) Done: out/minikube-linux-arm64 -p addons-531284 addons disable volumesnapshots --alsologtostderr -v=1: (1.030282164s)
--- PASS: TestAddons/parallel/CSI (54.61s)
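
The CSI test above drives a full provision/snapshot/restore cycle. The testdata manifests themselves are not reproduced in this log, so the following sketch of the first step uses a hypothetical minimal PVC in place of testdata/csi-hostpath-driver/pvc.yaml (the storage class name is an assumption):

	# Hypothetical stand-in for testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-531284 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  storageClassName: csi-hostpath-sc
	EOF
	# Poll the claim the same way helpers_test.go:394 does
	kubectl --context addons-531284 get pvc hpvc -o jsonpath={.status.phase} -n default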

TestAddons/parallel/Headlamp (11.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-531284 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-531284 --alsologtostderr -v=1: (1.642274954s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-dw57t" [01210cf4-f4ea-4b06-8e57-ac224da5cfb1] Pending
helpers_test.go:344: "headlamp-699c48fb74-dw57t" [01210cf4-f4ea-4b06-8e57-ac224da5cfb1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-dw57t" [01210cf4-f4ea-4b06-8e57-ac224da5cfb1] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.044175489s
--- PASS: TestAddons/parallel/Headlamp (11.69s)

TestAddons/parallel/CloudSpanner (5.75s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-vhftc" [25830f55-f12d-4a45-9eee-72f2eb0fbcc3] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.017925413s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-531284
--- PASS: TestAddons/parallel/CloudSpanner (5.75s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-531284 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-531284 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-531284
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-531284: (12.076892273s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-531284
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-531284
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-531284
--- PASS: TestAddons/StoppedEnableDisable (12.36s)
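
What this test establishes is that addon state can still be toggled after the profile is stopped; condensed by hand, the sequence is just:

	out/minikube-linux-arm64 stop -p addons-531284
	out/minikube-linux-arm64 addons enable dashboard -p addons-531284
	out/minikube-linux-arm64 addons disable dashboard -p addons-531284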

TestCertOptions (37.25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-497578 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-497578 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.458721143s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-497578 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-497578 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-497578 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-497578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-497578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-497578: (2.059387517s)
--- PASS: TestCertOptions (37.25s)
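
The openssl step above dumps the whole certificate; to inspect just the subject alternative names that --apiserver-names and --apiserver-ips control, a grep over the same output is enough (same cert path as the test):

	out/minikube-linux-arm64 -p cert-options-497578 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 'Subject Alternative Name'
	# Expect localhost, www.google.com, 127.0.0.1 and 192.168.15.15 among the entries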

TestCertExpiration (228.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-117609 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-117609 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.609660549s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-117609 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-117609 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.690377511s)
helpers_test.go:175: Cleaning up "cert-expiration-117609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-117609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-117609: (2.538628488s)
--- PASS: TestCertExpiration (228.84s)
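
The two starts above first issue certificates with a 3m lifetime, then restart with --cert-expiration=8760h (one year), forcing re-issuance. The resulting expiry can be confirmed with openssl, assuming the same certificate path as in TestCertOptions:

	out/minikube-linux-arm64 -p cert-expiration-117609 ssh \
		"openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"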

TestForceSystemdFlag (44.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-730860 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0914 19:21:04.128161  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:21:09.444399  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-730860 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.800786251s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-730860 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-730860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-730860
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-730860: (2.155362274s)
--- PASS: TestForceSystemdFlag (44.40s)
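
The assertion behind the cat /etc/containerd/config.toml step is containerd's runc cgroup-driver setting; a narrower manual check of the same file (expected value when --force-systemd is passed):

	out/minikube-linux-arm64 -p force-systemd-flag-730860 ssh \
		"grep SystemdCgroup /etc/containerd/config.toml"
	# With --force-systemd this should print: SystemdCgroup = true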

TestForceSystemdEnv (43.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-077383 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-077383 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.114006151s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-077383 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-077383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-077383
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-077383: (2.144780184s)
--- PASS: TestForceSystemdEnv (43.68s)

TestDockerEnvContainerd (50.99s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-032526 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-032526 --driver=docker  --container-runtime=containerd: (34.413842228s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-032526"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-032526": (1.230486562s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-X5GGHzrZFk31/agent.513382" SSH_AGENT_PID="513383" DOCKER_HOST=ssh://docker@127.0.0.1:33397 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-X5GGHzrZFk31/agent.513382" SSH_AGENT_PID="513383" DOCKER_HOST=ssh://docker@127.0.0.1:33397 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-X5GGHzrZFk31/agent.513382" SSH_AGENT_PID="513383" DOCKER_HOST=ssh://docker@127.0.0.1:33397 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.836535192s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-X5GGHzrZFk31/agent.513382" SSH_AGENT_PID="513383" DOCKER_HOST=ssh://docker@127.0.0.1:33397 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-032526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-032526
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-032526: (2.086318157s)
--- PASS: TestDockerEnvContainerd (50.99s)
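
Outside the harness, the docker-env wiring above is normally applied with eval rather than by exporting SSH_AUTH_SOCK and DOCKER_HOST by hand; a sketch against the same profile:

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-032526)"
	docker version      # now talks to the engine inside the minikube node over SSH
	docker image ls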

TestErrorSpam/setup (31.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-700163 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-700163 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-700163 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-700163 --driver=docker  --container-runtime=containerd: (31.827390965s)
--- PASS: TestErrorSpam/setup (31.83s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.91s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 pause
--- PASS: TestErrorSpam/pause (1.91s)

TestErrorSpam/unpause (1.97s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 unpause
--- PASS: TestErrorSpam/unpause (1.97s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 stop: (1.249816532s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-700163 --log_dir /tmp/nospam-700163 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17217-492678/.minikube/files/etc/test/nested/copy/498029/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-759345 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-759345 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (59.068818004s)
--- PASS: TestFunctional/serial/StartWithProxy (59.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-759345 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-759345 --alsologtostderr -v=8: (6.125708151s)
functional_test.go:659: soft start took 6.126234807s for "functional-759345" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.13s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-759345 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:3.1: (1.514400354s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:3.3: (1.424032042s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:latest: (1.374652336s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-759345 /tmp/TestFunctionalserialCacheCmdcacheadd_local99922006/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cache add minikube-local-cache-test:functional-759345
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 cache add minikube-local-cache-test:functional-759345: (1.028277476s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cache delete minikube-local-cache-test:functional-759345
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-759345
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (322.344188ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 cache reload: (1.577529149s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.59s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
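
Taken together, the CacheCmd tests cover the whole image-cache lifecycle; condensed into one shell session (cache list and cache delete take no profile because the cache lives under the minikube home directory, as the log's profile-less invocations show):

	out/minikube-linux-arm64 -p functional-759345 cache add registry.k8s.io/pause:latest
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 -p functional-759345 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-759345 cache reload    # pushes cached images back into the node
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest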

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 kubectl -- --context functional-759345 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-759345 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/LogsCmd (1.58s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 logs
E0914 18:51:04.444432  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:04.764904  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:51:05.405234  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 logs: (1.578199198s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 config get cpus: exit status 14 (79.517609ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 config get cpus: exit status 14 (87.014883ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
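
The two exit-status-14 results above are the expected "key not set" path; the full round-trip exercised here is:

	out/minikube-linux-arm64 -p functional-759345 config set cpus 2
	out/minikube-linux-arm64 -p functional-759345 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-759345 config unset cpus
	out/minikube-linux-arm64 -p functional-759345 config get cpus     # exit status 14 again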

TestFunctional/parallel/DashboardCmd (9.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-759345 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-759345 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 527541: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.35s)

TestFunctional/parallel/DryRun (0.75s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-759345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-759345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (333.219994ms)

-- stdout --
	* [functional-759345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0914 18:53:00.960633  527035 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:53:00.960850  527035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:00.960860  527035 out.go:309] Setting ErrFile to fd 2...
	I0914 18:53:00.960867  527035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:00.961167  527035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:53:00.962867  527035 out.go:303] Setting JSON to false
	I0914 18:53:00.967321  527035 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16524,"bootTime":1694701057,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:53:00.967408  527035 start.go:138] virtualization:  
	I0914 18:53:00.970176  527035 out.go:177] * [functional-759345] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 18:53:00.974251  527035 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:53:00.977529  527035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:53:00.974439  527035 notify.go:220] Checking for updates...
	I0914 18:53:00.984121  527035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:53:00.988204  527035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:53:00.993383  527035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:53:00.995591  527035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:53:00.998537  527035 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:53:00.999065  527035 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:53:01.040803  527035 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:53:01.040892  527035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:53:01.162802  527035 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-14 18:53:01.151289322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:53:01.162902  527035 docker.go:294] overlay module found
	I0914 18:53:01.165898  527035 out.go:177] * Using the docker driver based on existing profile
	I0914 18:53:01.168397  527035 start.go:298] selected driver: docker
	I0914 18:53:01.168419  527035 start.go:902] validating driver "docker" against &{Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:53:01.168526  527035 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:53:01.171827  527035 out.go:177] 
	W0914 18:53:01.174512  527035 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 18:53:01.177056  527035 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-759345 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.75s)
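
The failing invocation is deliberate: --dry-run validates flags against the existing profile without changing it, and 250MB falls below the 1800MB usable minimum reported above, so start exits non-zero. Reproduced by hand:

	out/minikube-linux-arm64 start -p functional-759345 --dry-run --memory 250MB
	echo $?    # 23 here, the RSRC_INSUFFICIENT_REQ_MEMORY failure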

TestFunctional/parallel/InternationalLanguage (0.31s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-759345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-759345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (309.98671ms)

-- stdout --
	* [functional-759345] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0914 18:53:00.616225  526973 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:53:00.616475  526973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:00.616503  526973 out.go:309] Setting ErrFile to fd 2...
	I0914 18:53:00.616522  526973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:53:00.616932  526973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 18:53:00.617377  526973 out.go:303] Setting JSON to false
	I0914 18:53:00.618572  526973 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16524,"bootTime":1694701057,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:53:00.618683  526973 start.go:138] virtualization:  
	I0914 18:53:00.621618  526973 out.go:177] * [functional-759345] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0914 18:53:00.624920  526973 notify.go:220] Checking for updates...
	I0914 18:53:00.627796  526973 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:53:00.630492  526973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:53:00.633204  526973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 18:53:00.635395  526973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 18:53:00.637687  526973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:53:00.639873  526973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:53:00.642774  526973 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 18:53:00.643367  526973 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:53:00.690052  526973 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 18:53:00.690151  526973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:53:00.833285  526973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-14 18:53:00.822707202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 18:53:00.833394  526973 docker.go:294] overlay module found
	I0914 18:53:00.837099  526973 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0914 18:53:00.839784  526973 start.go:298] selected driver: docker
	I0914 18:53:00.839809  526973 start.go:902] validating driver "docker" against &{Name:functional-759345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-759345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:53:00.839928  526973 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:53:00.842609  526973 out.go:177] 
	W0914 18:53:00.844700  526973 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 18:53:00.846843  526973 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.31s)

TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)
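The three invocations above exercise the default, Go-template, and JSON status formats. A sketch of the templated call, assuming the same profile (the {{.Host}}-style fields are minikube status fields; the labels before each colon, including the test's "kublet" spelling, are free-form text in the format string):

out/minikube-linux-arm64 -p functional-759345 status \
  -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# prints something like: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured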

TestFunctional/parallel/ServiceCmdConnect (9.64s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-759345 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-759345 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-mjv98" [f127be39-edeb-48da-bba2-54f2e0a55e1f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-mjv98" [f127be39-edeb-48da-bba2-54f2e0a55e1f] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.014420531s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30588
functional_test.go:1674: http://192.168.49.2:30588: success! body:

Hostname: hello-node-connect-7799dfb7c6-mjv98

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30588
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.64s)
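Condensed, the workflow this test drives is create, expose, resolve, request. A sketch using the same names as the log (the NodePort, 30588 here, varies per run):

kubectl --context functional-759345 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-759345 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-arm64 -p functional-759345 service hello-node-connect --url)
curl -s "$URL"   # echoserver answers with the hostname/request dump shown above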

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (83.41s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E0914 18:51:14.371138  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:344: "storage-provisioner" [4525561a-da21-495e-b7d3-5515c83d50df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
helpers_test.go:344: "storage-provisioner" [4525561a-da21-495e-b7d3-5515c83d50df] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 15.00529078s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-759345 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-759345 apply -f testdata/storage-provisioner/pvc.yaml
E0914 18:51:24.611336  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-759345 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-759345 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-759345 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-759345 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-759345 get pvc myclaim -o=json
E0914 18:51:45.092235  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-759345 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-759345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff6cdfe9-a423-4b00-a6ba-a07970288b10] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..)
helpers_test.go:344: "sp-pod" [ff6cdfe9-a423-4b00-a6ba-a07970288b10] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff6cdfe9-a423-4b00-a6ba-a07970288b10] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 37.009520829s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-759345 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-759345 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-759345 delete -f testdata/storage-provisioner/pod.yaml: (1.662604412s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-759345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8a5f5668-b685-4712-94a6-8af22abe175f] Pending
E0914 18:52:26.052781  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [8a5f5668-b685-4712-94a6-8af22abe175f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.017923039s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-759345 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (83.41s)
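The persistence check above is: bind a claim, write /tmp/mount/foo from the first pod, delete that pod, then confirm a second pod still sees the file. A hedged reconstruction of the claim that testdata/storage-provisioner/pvc.yaml plausibly defines (only the name "myclaim" is confirmed by the log; the access mode and size are assumptions):

kubectl --context functional-759345 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce      # assumption: typical for the default storage class
  resources:
    requests:
      storage: 500Mi     # assumption: illustrative size
EOF
kubectl --context functional-759345 get pvc myclaim -o=json   # poll until status.phase reports Bound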

TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (1.78s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh -n functional-759345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 cp functional-759345:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd401887832/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh -n functional-759345 "sudo cat /home/docker/cp-test.txt"
E0914 18:51:09.250765  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CpCmd (1.78s)
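The same round trip by hand, host to node and back (paths are the ones from the log; the local destination is illustrative):

out/minikube-linux-arm64 -p functional-759345 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-759345 cp functional-759345:/home/docker/cp-test.txt /tmp/cp-test.txt
out/minikube-linux-arm64 -p functional-759345 ssh -n functional-759345 "sudo cat /home/docker/cp-test.txt"
diff testdata/cp-test.txt /tmp/cp-test.txt   # empty output means the copy survived the round trip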

TestFunctional/parallel/FileSync (0.38s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/498029/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /etc/test/nested/copy/498029/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.32s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/498029.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /etc/ssl/certs/498029.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/498029.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /usr/share/ca-certificates/498029.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4980292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /etc/ssl/certs/4980292.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4980292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /usr/share/ca-certificates/4980292.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.32s)
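The test reads the same certificate material under a per-test path, the shared ca-certificates path, and its hash-named copy in /etc/ssl/certs. One way to confirm all three stay identical, assuming sha256sum is present in the node image:

for f in /etc/ssl/certs/498029.pem /usr/share/ca-certificates/498029.pem /etc/ssl/certs/51391683.0; do
  out/minikube-linux-arm64 -p functional-759345 ssh "sudo sha256sum $f"
done   # three identical digests means the synced copies agree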

TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-759345 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh "sudo systemctl is-active docker": exit status 1 (423.144505ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh "sudo systemctl is-active crio": exit status 1 (361.099648ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
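Both probes print "inactive" on stdout while ssh relays systemctl's non-zero exit (status 3), which minikube surfaces as exit status 1. The positive control under the same convention, since containerd is this job's configured runtime:

out/minikube-linux-arm64 -p functional-759345 ssh "sudo systemctl is-active containerd"
# expected: prints "active" and exits 0, where docker and crio above print "inactive" and fail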

TestFunctional/parallel/License (0.33s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-759345 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-759345 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-ksjgq" [11db74ec-8834-4a7a-8f25-a382c13b7d7b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-ksjgq" [11db74ec-8834-4a7a-8f25-a382c13b7d7b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.025184578s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.28s)

TestFunctional/parallel/ServiceCmd/List (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 service list -o json
functional_test.go:1493: Took "397.171363ms" to run "out/minikube-linux-arm64 -p functional-759345 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31799
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31799
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "351.306282ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "56.99049ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "362.154209ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "57.948326ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.82s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdany-port1673230971/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694717573257479312" to /tmp/TestFunctionalparallelMountCmdany-port1673230971/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694717573257479312" to /tmp/TestFunctionalparallelMountCmdany-port1673230971/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694717573257479312" to /tmp/TestFunctionalparallelMountCmdany-port1673230971/001/test-1694717573257479312
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.421276ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 18:52 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 18:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 18:52 test-1694717573257479312
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh cat /mount-9p/test-1694717573257479312
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-759345 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b1a11729-3ba2-4652-84b0-7398305b3890] Pending
helpers_test.go:344: "busybox-mount" [b1a11729-3ba2-4652-84b0-7398305b3890] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b1a11729-3ba2-4652-84b0-7398305b3890] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b1a11729-3ba2-4652-84b0-7398305b3890] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.016535546s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-759345 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdany-port1673230971/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.82s)
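The mount lifecycle above, condensed into a by-hand sketch (the host directory name is illustrative; /mount-9p matches the log, and the first findmnt may need a retry while the 9p mount settles, as it did above):

out/minikube-linux-arm64 mount -p functional-759345 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-759345 ssh -- ls -la /mount-9p
out/minikube-linux-arm64 -p functional-759345 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID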

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-759345 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (2.75s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdspecific-port1009787461/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (616.37682ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdspecific-port1009787461/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh "sudo umount -f /mount-9p": exit status 1 (334.752904ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-759345 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdspecific-port1009787461/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2796834026/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2796834026/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2796834026/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T" /mount1: exit status 1 (854.696546ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-759345 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2796834026/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2796834026/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-759345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2796834026/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)
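Rather than unmounting /mount1, /mount2, and /mount3 one by one, the cleanup path above uses the bulk kill switch; the same one-liner works interactively:

out/minikube-linux-arm64 mount -p functional-759345 --kill=true   # terminates every mount process for the profile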

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.5s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 version -o=json --components: (1.500707436s)
--- PASS: TestFunctional/parallel/Version/components (1.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-759345 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-759345
docker.io/kindest/kindnetd:v20230809-80a64d96
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-759345 image ls --format short --alsologtostderr:
I0914 18:53:26.304232  529392 out.go:296] Setting OutFile to fd 1 ...
I0914 18:53:26.304735  529392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.304747  529392 out.go:309] Setting ErrFile to fd 2...
I0914 18:53:26.304753  529392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.305131  529392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
I0914 18:53:26.305842  529392 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.305955  529392 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.306512  529392 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
I0914 18:53:26.332013  529392 ssh_runner.go:195] Run: systemctl --version
I0914 18:53:26.332072  529392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
I0914 18:53:26.363879  529392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
I0914 18:53:26.481051  529392 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
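The four ImageList subtests in this group differ only in the --format value; a sketch that walks all of them against the same profile:

for fmt in short table json yaml; do
  out/minikube-linux-arm64 -p functional-759345 image ls --format "$fmt" --alsologtostderr
done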

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-759345 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:91582c | 67.2MB |
| registry.k8s.io/kube-proxy                  | v1.28.1            | sha256:812f52 | 22MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-759345  | sha256:ed4881 | 1.01kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b18bf7 | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.1            | sha256:b4a5a5 | 17.1MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.1            | sha256:b29fb6 | 31.5MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1            | sha256:8b6e19 | 30.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-759345 image ls --format table --alsologtostderr:
I0914 18:53:26.945197  529521 out.go:296] Setting OutFile to fd 1 ...
I0914 18:53:26.945523  529521 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.945556  529521 out.go:309] Setting ErrFile to fd 2...
I0914 18:53:26.945578  529521 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.945873  529521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
I0914 18:53:26.946644  529521 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.946823  529521 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.947371  529521 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
I0914 18:53:26.974988  529521 ssh_runner.go:195] Run: systemctl --version
I0914 18:53:26.975054  529521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
I0914 18:53:26.999872  529521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
I0914 18:53:27.099083  529521 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-759345 image ls --format json --alsologtostderr:
[{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"25334607"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153"],"repoTags":["docker.io/library/nginx:latest"],"size":"67190207"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"31519813"},{"id":"sha256:812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":["registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"21974303"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ed4881a611afbb1f65ad0126bcae0f4b7614395e4dd617862414c3f81bdf48b9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-759345"],"size":"1006"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"30330541"},{"id":"sha256:b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"17052956"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-759345 image ls --format json --alsologtostderr:
I0914 18:53:26.657484  529451 out.go:296] Setting OutFile to fd 1 ...
I0914 18:53:26.659168  529451 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.659183  529451 out.go:309] Setting ErrFile to fd 2...
I0914 18:53:26.659191  529451 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.659621  529451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
I0914 18:53:26.660863  529451 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.661046  529451 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.661925  529451 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
I0914 18:53:26.687121  529451 ssh_runner.go:195] Run: systemctl --version
I0914 18:53:26.687176  529451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
I0914 18:53:26.709737  529451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
I0914 18:53:26.809510  529451 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-759345 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
repoTags:
- docker.io/library/nginx:latest
size: "67190207"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "31519813"
- id: sha256:812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "21974303"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "17052956"
- id: sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "25334607"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "30330541"
- id: sha256:ed4881a611afbb1f65ad0126bcae0f4b7614395e4dd617862414c3f81bdf48b9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-759345
size: "1006"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-759345 image ls --format yaml --alsologtostderr:
I0914 18:53:26.290586  529391 out.go:296] Setting OutFile to fd 1 ...
I0914 18:53:26.290909  529391 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.290945  529391 out.go:309] Setting ErrFile to fd 2...
I0914 18:53:26.290966  529391 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:26.291241  529391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
I0914 18:53:26.291965  529391 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.292167  529391 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:26.298881  529391 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
I0914 18:53:26.338306  529391 ssh_runner.go:195] Run: systemctl --version
I0914 18:53:26.338356  529391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
I0914 18:53:26.376341  529391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
I0914 18:53:26.485203  529391 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)
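
For reference, the listing above is assembled from the "sudo crictl images --output json" call shown in the stderr trace. A minimal Go sketch of decoding that payload follows; the field names mirror the entries in the YAML listing (id, repoDigests, repoTags, size), but the exact JSON envelope ({"images":[...]}) is an assumption here, not something this report shows.

package main

import (
	"encoding/json"
	"fmt"
)

// criImage mirrors one entry of the listing above; note that sizes are
// printed as strings, not numbers.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

// criImageList is the assumed top-level envelope of `crictl images --output json`.
type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Sample entry taken from the coredns line in the listing above.
	raw := []byte(`{"images":[{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"repoDigests":[],"size":"14557471"}]}`)
	var list criImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}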

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-759345 ssh pgrep buildkitd: exit status 1 (407.385068ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image build -t localhost/my-image:functional-759345 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-759345 image build -t localhost/my-image:functional-759345 testdata/build --alsologtostderr: (2.305599763s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-759345 image build -t localhost/my-image:functional-759345 testdata/build --alsologtostderr:
I0914 18:53:27.042006  529534 out.go:296] Setting OutFile to fd 1 ...
I0914 18:53:27.042665  529534 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:27.042680  529534 out.go:309] Setting ErrFile to fd 2...
I0914 18:53:27.042686  529534 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:53:27.042962  529534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
I0914 18:53:27.043711  529534 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:27.044396  529534 config.go:182] Loaded profile config "functional-759345": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0914 18:53:27.044962  529534 cli_runner.go:164] Run: docker container inspect functional-759345 --format={{.State.Status}}
I0914 18:53:27.067043  529534 ssh_runner.go:195] Run: systemctl --version
I0914 18:53:27.067101  529534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-759345
I0914 18:53:27.086667  529534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/functional-759345/id_rsa Username:docker}
I0914 18:53:27.194513  529534 build_images.go:151] Building image from path: /tmp/build.1774696037.tar
I0914 18:53:27.194580  529534 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 18:53:27.205568  529534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1774696037.tar
I0914 18:53:27.210247  529534 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1774696037.tar: stat -c "%s %y" /var/lib/minikube/build/build.1774696037.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1774696037.tar': No such file or directory
I0914 18:53:27.210277  529534 ssh_runner.go:362] scp /tmp/build.1774696037.tar --> /var/lib/minikube/build/build.1774696037.tar (3072 bytes)
I0914 18:53:27.241380  529534 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1774696037
I0914 18:53:27.252428  529534 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1774696037 -xf /var/lib/minikube/build/build.1774696037.tar
I0914 18:53:27.264172  529534 containerd.go:378] Building image: /var/lib/minikube/build/build.1774696037
I0914 18:53:27.264264  529534 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1774696037 --local dockerfile=/var/lib/minikube/build/build.1774696037 --output type=image,name=localhost/my-image:functional-759345
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:f7981e651f27b249a40d2f95f476b653c790b7ae4fe2f0dd3579cfb55b5afb3a 0.0s done
#8 exporting config sha256:716fd8117b7bba0a059d6b20695c0c7e766efe83485d649758c2f0ca5183f14a 0.0s done
#8 naming to localhost/my-image:functional-759345 done
#8 DONE 0.1s
I0914 18:53:29.237338  529534 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1774696037 --local dockerfile=/var/lib/minikube/build/build.1774696037 --output type=image,name=localhost/my-image:functional-759345: (1.973048941s)
I0914 18:53:29.237418  529534 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1774696037
I0914 18:53:29.249740  529534 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1774696037.tar
I0914 18:53:29.260942  529534 build_images.go:207] Built localhost/my-image:functional-759345 from /tmp/build.1774696037.tar
I0914 18:53:29.260975  529534 build_images.go:123] succeeded building to: functional-759345
I0914 18:53:29.260980  529534 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)
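
The buildkit trace above (steps #1 through #8) effectively documents the testdata/build context: a small Dockerfile plus a content.txt file (the 62B of context transferred in step #4). A plausible reconstruction of that Dockerfile, inferred only from the step names in the trace and not copied from the repository, would be:

FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /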

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.750635068s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-759345
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image rm gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-759345
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-759345 image save --daemon gcr.io/google-containers/addon-resizer:functional-759345 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-759345
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-759345
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-759345
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-759345
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (91.77s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-480282 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0914 18:53:47.973790  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-480282 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m31.766181566s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (91.77s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.05s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons enable ingress --alsologtostderr -v=5: (10.054176516s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.05s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.74s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480282 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.74s)

                                                
                                    
TestJSONOutput/start/Command (59.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-739921 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0914 18:56:14.565015  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:19.686156  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:29.926390  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 18:56:31.814009  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 18:56:50.406667  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-739921 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.234700598s)
--- PASS: TestJSONOutput/start/Command (59.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-739921 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-739921 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.73s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-739921 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-739921 --output=json --user=testUser: (5.882611565s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-297489 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-297489 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.377942ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e4000e1c-376c-4f63-ac17-15a1c09033cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-297489] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9259119a-6c62-4c32-a879-202c3a788bd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17217"}}
	{"specversion":"1.0","id":"44b2cf0e-b517-417a-9bdd-32d42afddd0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad16a149-eca3-4589-88f5-db3d28c87b90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig"}}
	{"specversion":"1.0","id":"4ed1c8e7-ffb1-422d-9e3f-e503478d0762","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube"}}
	{"specversion":"1.0","id":"9e603c37-1329-4647-af65-da33d42dc6bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e30cc5ac-ab7f-4759-a763-20feb2ed4780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fbc3370a-baa4-440f-92c9-1536e839c4e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-297489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-297489
--- PASS: TestErrorJSONOutput (0.24s)
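
Each stdout line above is a CloudEvents-style JSON object. A minimal Go sketch for consuming such a stream follows; the struct fields are taken directly from the keys visible in the output, while data is left as a generic string map because step, info, and error events carry different keys.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent models one line of `minikube start --output=json`, using only
// the keys visible in the stdout block above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}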

                                                
                                    
TestKicCustomNetwork/create_custom_network (47.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-260991 --network=
E0914 18:57:31.367645  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-260991 --network=: (45.080502096s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-260991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-260991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-260991: (2.042571213s)
--- PASS: TestKicCustomNetwork/create_custom_network (47.15s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.87s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-719255 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-719255 --network=bridge: (30.874549584s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-719255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-719255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-719255: (1.96745598s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.87s)

                                                
                                    
TestKicExistingNetwork (34.49s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-034472 --network=existing-network
E0914 18:58:53.288715  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-034472 --network=existing-network: (32.342362833s)
helpers_test.go:175: Cleaning up "existing-network-034472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-034472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-034472: (1.957093229s)
--- PASS: TestKicExistingNetwork (34.49s)

                                                
                                    
TestKicCustomSubnet (36.69s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-622514 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-622514 --subnet=192.168.60.0/24: (34.597230278s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-622514 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-622514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-622514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-622514: (2.061163342s)
--- PASS: TestKicCustomSubnet (36.69s)
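
The --format argument passed to docker network inspect above is a Go text/template. The sketch below evaluates the same template expression against a stand-in value to show what docker computes; the struct shape (IPAM.Config[0].Subnet) mimics docker's network-inspect JSON but is a simplification, not docker's real type.

package main

import (
	"os"
	"text/template"
)

type ipamConfig struct{ Subnet string }

type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}
	// Same template string as the docker network inspect call above:
	// index into the Config slice, then read the Subnet field.
	tmpl := template.Must(template.New("subnet").Parse("{{(index .IPAM.Config 0).Subnet}}"))
	tmpl.Execute(os.Stdout, n) // prints 192.168.60.0/24
}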

                                                
                                    
TestKicStaticIP (34.79s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-156632 --static-ip=192.168.200.200
E0914 19:00:14.993492  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:14.998794  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:15.010476  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:15.030741  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:15.071004  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:15.151303  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:15.311726  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:15.632265  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:16.273160  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:17.553377  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:20.113712  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:25.234329  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-156632 --static-ip=192.168.200.200: (32.532582251s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-156632 ip
helpers_test.go:175: Cleaning up "static-ip-156632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-156632
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-156632: (2.087413233s)
--- PASS: TestKicStaticIP (34.79s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-032722 --driver=docker  --container-runtime=containerd
E0914 19:00:35.474572  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:00:55.954782  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-032722 --driver=docker  --container-runtime=containerd: (31.12500685s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-035356 --driver=docker  --container-runtime=containerd
E0914 19:01:04.127059  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:01:09.448750  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-035356 --driver=docker  --container-runtime=containerd: (31.619402355s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-032722
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-035356
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-035356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-035356
E0914 19:01:36.915044  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:01:37.129375  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-035356: (1.99508608s)
helpers_test.go:175: Cleaning up "first-032722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-032722
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-032722: (2.28390372s)
--- PASS: TestMinikubeProfile (68.30s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-348169 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-348169 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.534958805s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.54s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-348169 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.19s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-350057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-350057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.186472133s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.19s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-350057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-348169 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-348169 --alsologtostderr -v=5: (1.678403989s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-350057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-350057
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-350057: (1.212803575s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.53s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-350057
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-350057: (6.530709381s)
--- PASS: TestMountStart/serial/RestartStopped (7.53s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-350057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-587826 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 19:02:58.835839  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-587826 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.651264769s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.26s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-587826 -- rollout status deployment/busybox: (3.407160212s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-6tbkl -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-lk6v2 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-6tbkl -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-lk6v2 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-6tbkl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-lk6v2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.52s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-6tbkl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-6tbkl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-lk6v2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-587826 -- exec busybox-5bc68d56bd-lk6v2 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.14s)
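
The shell pipeline at multinode_test.go:560 ("nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3") extracts the host IP from nslookup's fifth output line, third single-space-separated field. A Go sketch of the same parsing follows; the sample output is illustrative of busybox nslookup's format, not copied from this report.

package main

import (
	"fmt"
	"strings"
)

// hostIP replicates awk 'NR==5' (take the fifth line) followed by
// cut -d' ' -f3 (take the third field, splitting on every single space,
// as cut does).
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.58.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.58.1
}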

                                                
                                    
TestMultiNode/serial/AddNode (19.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-587826 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-587826 -v 3 --alsologtostderr: (18.780563645s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.59s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp testdata/cp-test.txt multinode-587826:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile376490134/001/cp-test_multinode-587826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826:/home/docker/cp-test.txt multinode-587826-m02:/home/docker/cp-test_multinode-587826_multinode-587826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/cp-test_multinode-587826_multinode-587826-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826:/home/docker/cp-test.txt multinode-587826-m03:/home/docker/cp-test_multinode-587826_multinode-587826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m03 "sudo cat /home/docker/cp-test_multinode-587826_multinode-587826-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp testdata/cp-test.txt multinode-587826-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile376490134/001/cp-test_multinode-587826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826-m02:/home/docker/cp-test.txt multinode-587826:/home/docker/cp-test_multinode-587826-m02_multinode-587826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test_multinode-587826-m02_multinode-587826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826-m02:/home/docker/cp-test.txt multinode-587826-m03:/home/docker/cp-test_multinode-587826-m02_multinode-587826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m03 "sudo cat /home/docker/cp-test_multinode-587826-m02_multinode-587826-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp testdata/cp-test.txt multinode-587826-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile376490134/001/cp-test_multinode-587826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826-m03:/home/docker/cp-test.txt multinode-587826:/home/docker/cp-test_multinode-587826-m03_multinode-587826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test_multinode-587826-m03_multinode-587826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826-m03:/home/docker/cp-test.txt multinode-587826-m02:/home/docker/cp-test_multinode-587826-m03_multinode-587826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/cp-test_multinode-587826-m03_multinode-587826-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.25s)
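
The CopyFile steps above exercise `minikube cp` in every direction: host to node, node back to the host, and node to node, with each copy verified by reading the file back over `ssh ... sudo cat`. A minimal sketch of one round trip, assuming the multinode-587826 profile from this run is still up (the destination filename copy.txt is illustrative):

    # host -> node, then verify over SSH
    out/minikube-linux-arm64 -p multinode-587826 cp testdata/cp-test.txt multinode-587826:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826 "sudo cat /home/docker/cp-test.txt"
    # node -> node, both sides addressed as <node>:<path>
    out/minikube-linux-arm64 -p multinode-587826 cp multinode-587826:/home/docker/cp-test.txt multinode-587826-m02:/home/docker/copy.txt
    out/minikube-linux-arm64 -p multinode-587826 ssh -n multinode-587826-m02 "sudo cat /home/docker/copy.txt"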

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-587826 node stop m03: (1.245328077s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-587826 status: exit status 7 (589.062552ms)

                                                
                                                
-- stdout --
	multinode-587826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-587826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-587826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr: exit status 7 (582.086843ms)

                                                
                                                
-- stdout --
	multinode-587826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-587826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-587826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 19:04:06.567187  576759 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:04:06.567337  576759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:04:06.567346  576759 out.go:309] Setting ErrFile to fd 2...
	I0914 19:04:06.567353  576759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:04:06.567805  576759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 19:04:06.568050  576759 out.go:303] Setting JSON to false
	I0914 19:04:06.568100  576759 mustload.go:65] Loading cluster: multinode-587826
	I0914 19:04:06.569083  576759 config.go:182] Loaded profile config "multinode-587826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 19:04:06.569137  576759 status.go:255] checking status of multinode-587826 ...
	I0914 19:04:06.569884  576759 cli_runner.go:164] Run: docker container inspect multinode-587826 --format={{.State.Status}}
	I0914 19:04:06.572925  576759 notify.go:220] Checking for updates...
	I0914 19:04:06.596096  576759 status.go:330] multinode-587826 host status = "Running" (err=<nil>)
	I0914 19:04:06.596127  576759 host.go:66] Checking if "multinode-587826" exists ...
	I0914 19:04:06.596499  576759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-587826
	I0914 19:04:06.615202  576759 host.go:66] Checking if "multinode-587826" exists ...
	I0914 19:04:06.615526  576759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 19:04:06.615574  576759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-587826
	I0914 19:04:06.637015  576759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/multinode-587826/id_rsa Username:docker}
	I0914 19:04:06.736171  576759 ssh_runner.go:195] Run: systemctl --version
	I0914 19:04:06.742996  576759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:04:06.759406  576759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 19:04:06.846306  576759 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 19:04:06.834528465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 19:04:06.846969  576759 kubeconfig.go:92] found "multinode-587826" server: "https://192.168.58.2:8443"
	I0914 19:04:06.847009  576759 api_server.go:166] Checking apiserver status ...
	I0914 19:04:06.847061  576759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:04:06.861080  576759 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1313/cgroup
	I0914 19:04:06.873522  576759 api_server.go:182] apiserver freezer: "11:freezer:/docker/c9a2aced1c06fa884e02f9690af4318b853de4cc9ec3630062e9fb2fcc6fedf5/kubepods/burstable/podfbad2a9f24e407859a5c20821dea8e96/cfa8ec95835b956c2a7c5bc8429dd3bb6dcf24064b3e94232ec83e3d24480546"
	I0914 19:04:06.873602  576759 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c9a2aced1c06fa884e02f9690af4318b853de4cc9ec3630062e9fb2fcc6fedf5/kubepods/burstable/podfbad2a9f24e407859a5c20821dea8e96/cfa8ec95835b956c2a7c5bc8429dd3bb6dcf24064b3e94232ec83e3d24480546/freezer.state
	I0914 19:04:06.884522  576759 api_server.go:204] freezer state: "THAWED"
	I0914 19:04:06.884550  576759 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0914 19:04:06.893865  576759 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0914 19:04:06.893897  576759 status.go:421] multinode-587826 apiserver status = Running (err=<nil>)
	I0914 19:04:06.893914  576759 status.go:257] multinode-587826 status: &{Name:multinode-587826 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:04:06.893964  576759 status.go:255] checking status of multinode-587826-m02 ...
	I0914 19:04:06.894286  576759 cli_runner.go:164] Run: docker container inspect multinode-587826-m02 --format={{.State.Status}}
	I0914 19:04:06.916868  576759 status.go:330] multinode-587826-m02 host status = "Running" (err=<nil>)
	I0914 19:04:06.916912  576759 host.go:66] Checking if "multinode-587826-m02" exists ...
	I0914 19:04:06.917370  576759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-587826-m02
	I0914 19:04:06.935953  576759 host.go:66] Checking if "multinode-587826-m02" exists ...
	I0914 19:04:06.936263  576759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 19:04:06.936313  576759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-587826-m02
	I0914 19:04:06.955216  576759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33477 SSHKeyPath:/home/jenkins/minikube-integration/17217-492678/.minikube/machines/multinode-587826-m02/id_rsa Username:docker}
	I0914 19:04:07.055130  576759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:04:07.068891  576759 status.go:257] multinode-587826-m02 status: &{Name:multinode-587826-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:04:07.068937  576759 status.go:255] checking status of multinode-587826-m03 ...
	I0914 19:04:07.069240  576759 cli_runner.go:164] Run: docker container inspect multinode-587826-m03 --format={{.State.Status}}
	I0914 19:04:07.087778  576759 status.go:330] multinode-587826-m03 host status = "Stopped" (err=<nil>)
	I0914 19:04:07.087798  576759 status.go:343] host is not running, skipping remaining checks
	I0914 19:04:07.087805  576759 status.go:257] multinode-587826-m03 status: &{Name:multinode-587826-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
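
Both status calls above exit non-zero by design: `minikube status` returns exit code 7 whenever any host in the profile is stopped, so the test asserts on the exit code rather than treating the command failure as an error. The same check in isolation (the echo is illustrative):

    out/minikube-linux-arm64 -p multinode-587826 node stop m03
    out/minikube-linux-arm64 -p multinode-587826 status
    echo "status exit code: $?"    # 7 once m03 reports host: Stopped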

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-587826 node start m03 --alsologtostderr: (11.653960974s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.49s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (117.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-587826
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-587826
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-587826: (25.17472739s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-587826 --wait=true -v=8 --alsologtostderr
E0914 19:05:14.993482  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:05:42.676632  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:06:04.127050  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:06:09.443977  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-587826 --wait=true -v=8 --alsologtostderr: (1m32.540445144s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-587826
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.85s)
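
The interleaved cert_rotation errors appear to be the test process's client-go watchers still pointing at certificates of profiles deleted earlier in the run (ingress-addon-legacy-480282 and friends); they are noise, not failures of this test. The test itself only checks that the node list survives a full stop/start cycle, roughly as follows (the /tmp paths are illustrative):

    out/minikube-linux-arm64 node list -p multinode-587826 > /tmp/nodes.before
    out/minikube-linux-arm64 stop -p multinode-587826
    out/minikube-linux-arm64 start -p multinode-587826 --wait=true
    out/minikube-linux-arm64 node list -p multinode-587826 > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after    # empty diff: the restart kept all three nodes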

                                                
                                    
TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-587826 node delete m03: (4.394898548s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-587826 stop: (23.99352671s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-587826 status: exit status 7 (92.728612ms)

                                                
                                                
-- stdout --
	multinode-587826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-587826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr: exit status 7 (92.920021ms)

                                                
                                                
-- stdout --
	multinode-587826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-587826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 19:06:46.736361  585330 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:06:46.736560  585330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:06:46.736571  585330 out.go:309] Setting ErrFile to fd 2...
	I0914 19:06:46.736609  585330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:06:46.736889  585330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 19:06:46.737114  585330 out.go:303] Setting JSON to false
	I0914 19:06:46.737166  585330 mustload.go:65] Loading cluster: multinode-587826
	I0914 19:06:46.737301  585330 notify.go:220] Checking for updates...
	I0914 19:06:46.737614  585330 config.go:182] Loaded profile config "multinode-587826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 19:06:46.737633  585330 status.go:255] checking status of multinode-587826 ...
	I0914 19:06:46.738173  585330 cli_runner.go:164] Run: docker container inspect multinode-587826 --format={{.State.Status}}
	I0914 19:06:46.758168  585330 status.go:330] multinode-587826 host status = "Stopped" (err=<nil>)
	I0914 19:06:46.758191  585330 status.go:343] host is not running, skipping remaining checks
	I0914 19:06:46.758199  585330 status.go:257] multinode-587826 status: &{Name:multinode-587826 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:06:46.758223  585330 status.go:255] checking status of multinode-587826-m02 ...
	I0914 19:06:46.758530  585330 cli_runner.go:164] Run: docker container inspect multinode-587826-m02 --format={{.State.Status}}
	I0914 19:06:46.775829  585330 status.go:330] multinode-587826-m02 host status = "Stopped" (err=<nil>)
	I0914 19:06:46.775852  585330 status.go:343] host is not running, skipping remaining checks
	I0914 19:06:46.775860  585330 status.go:257] multinode-587826-m02 status: &{Name:multinode-587826-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.18s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (82.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-587826 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 19:07:27.174297  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-587826 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m21.435743475s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-587826 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-587826
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-587826-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-587826-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.539989ms)

                                                
                                                
-- stdout --
	* [multinode-587826-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-587826-m02' is duplicated with machine name 'multinode-587826-m02' in profile 'multinode-587826'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-587826-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-587826-m03 --driver=docker  --container-runtime=containerd: (29.915176589s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-587826
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-587826: exit status 80 (603.361157ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-587826
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-587826-m03 already exists in multinode-587826-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-587826-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-587826-m03: (1.977532013s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.66s)
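
Both non-zero exits above are the expected outcomes: exit 14 (MK_USAGE) because multinode-587826-m02 is already a machine name inside the multinode-587826 profile, and exit 80 (GUEST_NODE_ADD) because the next auto-generated node name, m03, collides with the standalone multinode-587826-m03 profile created just before. Reproduced in isolation, assuming multinode-587826 is running:

    out/minikube-linux-arm64 start -p multinode-587826-m02 --driver=docker --container-runtime=containerd   # exit 14: name taken by a machine
    out/minikube-linux-arm64 start -p multinode-587826-m03 --driver=docker --container-runtime=containerd   # ok: standalone profile
    out/minikube-linux-arm64 node add -p multinode-587826       # exit 80: generated name m03 collides with that profile
    out/minikube-linux-arm64 delete -p multinode-587826-m03     # clean up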

                                                
                                    
TestPreload (184.55s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-498726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-498726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.534374746s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-498726 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-498726 image pull gcr.io/k8s-minikube/busybox: (1.395357798s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-498726
E0914 19:10:14.994663  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-498726: (12.068889932s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-498726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0914 19:11:04.127020  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:11:09.443967  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-498726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m23.943596103s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-498726 image list
helpers_test.go:175: Cleaning up "test-preload-498726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-498726
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-498726: (2.347440935s)
--- PASS: TestPreload (184.55s)

                                                
                                    
TestScheduledStopUnix (110.03s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-172802 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-172802 --memory=2048 --driver=docker  --container-runtime=containerd: (33.690208s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172802 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-172802 -n scheduled-stop-172802
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172802 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172802 --cancel-scheduled
E0914 19:12:32.490422  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172802 -n scheduled-stop-172802
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-172802
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172802 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-172802
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-172802: exit status 7 (86.121907ms)

                                                
                                                
-- stdout --
	scheduled-stop-172802
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172802 -n scheduled-stop-172802
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172802 -n scheduled-stop-172802: exit status 7 (70.376211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-172802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-172802
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-172802: (4.68890642s)
--- PASS: TestScheduledStopUnix (110.03s)
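
The sequence above covers the three knobs of scheduled stop: `--schedule <duration>` arms a deferred stop (apparently via a background process, hence the "os: process already finished" signal notes when one schedule replaces another), `--cancel-scheduled` disarms it, and `status --format={{.TimeToStop}}` shows the pending deadline. Condensed into a sketch:

    out/minikube-linux-arm64 stop -p scheduled-stop-172802 --schedule 5m        # arm a stop 5 minutes out
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-172802
    out/minikube-linux-arm64 stop -p scheduled-stop-172802 --cancel-scheduled   # disarm it
    out/minikube-linux-arm64 stop -p scheduled-stop-172802 --schedule 15s       # re-arm; fires ~15s later
    sleep 20 && out/minikube-linux-arm64 status -p scheduled-stop-172802        # exit 7 once the host is Stopped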

                                                
                                    
TestInsufficientStorage (10.78s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-359132 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-359132 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.253009304s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7d9fc279-33fc-4ac8-aa80-afdd659e24cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-359132] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8779b3c-6b70-4f18-ab37-618a704561cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17217"}}
	{"specversion":"1.0","id":"995d3c6a-c657-4d3a-80a2-300c13ef3161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9733a83d-8ffe-4904-a954-8d52529ba44a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig"}}
	{"specversion":"1.0","id":"297cf3f5-744e-41e7-81a8-4aee8940cc28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube"}}
	{"specversion":"1.0","id":"3121652f-033e-4186-8b86-a63438a368b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c430c9ce-702a-4377-b504-2ef688524742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99e060bd-424b-4337-b53d-51a0e4d07ca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"009ec593-5766-4298-a55e-97e1dcb1d3e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d639bc0c-409f-42c8-bd46-7a682b48813e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ee219ec-2778-4662-b2f4-fbea5eeaa20c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"23faf86f-ee04-4a40-82a1-f71d9ff0e60c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-359132 in cluster insufficient-storage-359132","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2becdd45-f304-46c3-9c3d-10336fbd46ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a39c2f4-342f-4510-b092-b681c0282819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e739756-a580-4511-a477-5b6a7748d096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-359132 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-359132 --output=json --layout=cluster: exit status 7 (318.697284ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-359132","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-359132","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 19:13:48.727016  602871 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-359132" does not appear in /home/jenkins/minikube-integration/17217-492678/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-359132 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-359132 --output=json --layout=cluster: exit status 7 (298.597429ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-359132","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-359132","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 19:13:49.027772  602923 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-359132" does not appear in /home/jenkins/minikube-integration/17217-492678/kubeconfig
	E0914 19:13:49.040209  602923 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/insufficient-storage-359132/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-359132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-359132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-359132: (1.911136598s)
--- PASS: TestInsufficientStorage (10.78s)
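
With `--output=json`, each start step is emitted as a CloudEvents-style JSON line, so the failure is machine-readable: the run ends with an io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). The low-disk condition is simulated via the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the events above. A sketch of extracting the error with jq (jq is illustrative, not part of the test):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-359132 --memory=2048 --output=json \
      --wait=true --driver=docker --container-runtime=containerd |
      jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": exit " + .data.exitcode'
    # prints: RSRC_DOCKER_STORAGE: exit 26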

                                                
                                    
TestRunningBinaryUpgrade (81.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.2045290995.exe start -p running-upgrade-831893 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.2045290995.exe start -p running-upgrade-831893 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.291830677s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-831893 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-831893 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.451383845s)
helpers_test.go:175: Cleaning up "running-upgrade-831893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-831893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-831893: (3.037404734s)
--- PASS: TestRunningBinaryUpgrade (81.04s)

                                                
                                    
TestKubernetesUpgrade (389.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.433672162s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-315050
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-315050: (1.456755002s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-315050 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-315050 status --format={{.Host}}: exit status 7 (88.193928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m51.413445738s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-315050 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (128.240572ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-315050] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-315050
	    minikube start -p kubernetes-upgrade-315050 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3150502 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-315050 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.295998475s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-315050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-315050
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-315050: (2.499362992s)
--- PASS: TestKubernetesUpgrade (389.48s)
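
The whole upgrade path is plain `start` invocations: upgrading a stopped cluster to a newer --kubernetes-version is accepted, while a downgrade is refused up front with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recreate/second-cluster suggestions shown above. Reduced to its steps:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-315050
    out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.28.1 --driver=docker --container-runtime=containerd   # upgrade: allowed
    out/minikube-linux-arm64 start -p kubernetes-upgrade-315050 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=containerd   # downgrade: exit 106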

                                                
                                    
TestMissingContainerUpgrade (164.51s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.734505824.exe start -p missing-upgrade-558661 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.734505824.exe start -p missing-upgrade-558661 --memory=2200 --driver=docker  --container-runtime=containerd: (1m20.988568312s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-558661
E0914 19:15:14.994193  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-558661: (14.489151483s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-558661
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-558661 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0914 19:16:04.127672  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:16:09.444854  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-558661 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.667430768s)
helpers_test.go:175: Cleaning up "missing-upgrade-558661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-558661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-558661: (3.31989749s)
--- PASS: TestMissingContainerUpgrade (164.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (87.285224ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-722000] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
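
As the error text says, `--no-kubernetes` cannot be combined with `--kubernetes-version`; the `config unset` hint covers the case where the version comes from global config rather than the flag. In short:

    out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd   # exit 14: flags conflict
    out/minikube-linux-arm64 config unset kubernetes-version   # clear a globally pinned version, if any
    out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --driver=docker --container-runtime=containerd   # ok, as StartWithStopK8s below shows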

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-722000 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-722000 --driver=docker  --container-runtime=containerd: (43.446146844s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-722000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.143183495s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-722000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-722000 status -o json: exit status 2 (441.724102ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-722000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-722000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-722000: (1.967834579s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.55s)

                                                
                                    
TestNoKubernetes/serial/Start (9.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-722000 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.796870499s)
--- PASS: TestNoKubernetes/serial/Start (9.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-722000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-722000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (504.826805ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (6.07311862s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.62s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-722000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-722000: (1.253089606s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-722000 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-722000 --driver=docker  --container-runtime=containerd: (6.902904348s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-722000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-722000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.632854ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (96.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.2703734731.exe start -p stopped-upgrade-554258 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0914 19:16:38.037905  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.2703734731.exe start -p stopped-upgrade-554258 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.633749085s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.2703734731.exe -p stopped-upgrade-554258 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.2703734731.exe -p stopped-upgrade-554258 stop: (1.283561773s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-554258 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-554258 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.285711516s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.20s)
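The sequence above is the whole upgrade contract: the legacy v1.26.0 binary provisions and then stops the cluster, after which the binary under test must start it cleanly. Condensed, with the temp binary name exactly as logged (it is generated per run):

	# 1. old binary provisions the cluster
	/tmp/minikube-v1.26.0.2703734731.exe start -p stopped-upgrade-554258 --memory=2200 --vm-driver=docker --container-runtime=containerd
	# 2. old binary stops it
	/tmp/minikube-v1.26.0.2703734731.exe -p stopped-upgrade-554258 stop
	# 3. new binary must restart it without reconfiguration errors
	out/minikube-linux-arm64 start -p stopped-upgrade-554258 --memory=2200 --driver=docker --container-runtime=containerd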

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-554258
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-554258: (1.068864189s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                    
TestPause/serial/Start (56.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-335944 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0914 19:20:14.993907  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-335944 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.37841257s)
--- PASS: TestPause/serial/Start (56.38s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.6s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-335944 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-335944 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.567690612s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.60s)

                                                
                                    
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-335944 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-335944 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-335944 --output=json --layout=cluster: exit status 2 (356.863954ms)

                                                
                                                
-- stdout --
	{"Name":"pause-335944","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-335944","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
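With --layout=cluster the status codes in the JSON above are HTTP-flavoured: 200 OK, 405 Stopped, 418 Paused. A paused cluster also makes the status command itself exit 2, which the test tolerates. Extracting just the per-component codes, assuming jq is available (illustrative):

	out/minikube-linux-arm64 status -p pause-335944 --output=json --layout=cluster | jq '.Nodes[].Components'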

                                                
                                    
TestPause/serial/Unpause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-335944 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-335944 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.98s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-335944 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-335944 --alsologtostderr -v=5: (2.977110468s)
--- PASS: TestPause/serial/DeletePaused (2.98s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-335944
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-335944: exit status 1 (19.633466ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-335944: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.36s)
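The non-zero exit from docker volume inspect is the desired outcome: after delete -p, no container, volume, or network belonging to the profile should survive. The same checks in scriptable form (illustrative):

	docker volume inspect pause-335944 >/dev/null 2>&1 || echo "volume gone, as expected"
	docker ps -a --filter name=pause-335944 --format '{{.Names}}'     # expect no output
	docker network ls --filter name=pause-335944 --format '{{.Name}}' # expect no output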

                                                
                                    
TestNetworkPlugins/group/false (4.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-989259 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-989259 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (261.325527ms)

                                                
                                                
-- stdout --
	* [false-989259] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 19:21:35.341197  640669 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:21:35.341431  640669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:21:35.341457  640669 out.go:309] Setting ErrFile to fd 2...
	I0914 19:21:35.341476  640669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:21:35.341750  640669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-492678/.minikube/bin
	I0914 19:21:35.342220  640669 out.go:303] Setting JSON to false
	I0914 19:21:35.343417  640669 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18239,"bootTime":1694701057,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 19:21:35.343516  640669 start.go:138] virtualization:  
	I0914 19:21:35.346646  640669 out.go:177] * [false-989259] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 19:21:35.349577  640669 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 19:21:35.351774  640669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 19:21:35.349673  640669 notify.go:220] Checking for updates...
	I0914 19:21:35.353810  640669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-492678/kubeconfig
	I0914 19:21:35.355851  640669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-492678/.minikube
	I0914 19:21:35.357941  640669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 19:21:35.360104  640669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 19:21:35.362899  640669 config.go:182] Loaded profile config "kubernetes-upgrade-315050": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0914 19:21:35.363016  640669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 19:21:35.394106  640669 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 19:21:35.394205  640669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 19:21:35.520405  640669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-14 19:21:35.510681612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 19:21:35.520509  640669 docker.go:294] overlay module found
	I0914 19:21:35.524060  640669 out.go:177] * Using the docker driver based on user configuration
	I0914 19:21:35.526199  640669 start.go:298] selected driver: docker
	I0914 19:21:35.526219  640669 start.go:902] validating driver "docker" against <nil>
	I0914 19:21:35.526236  640669 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 19:21:35.528499  640669 out.go:177] 
	W0914 19:21:35.530465  640669 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0914 19:21:35.532523  640669 out.go:177] 

                                                
                                                
** /stderr **
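Exit status 14 is minikube's usage-error class (MK_USAGE, per the X line above): containerd has no built-in pod networking, so --cni=false is rejected before any cluster is created, and that refusal is exactly what this test asserts. A start that would actually proceed needs some CNI enabled, for example (illustrative, not run by the test):

	out/minikube-linux-arm64 start -p false-989259 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd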
net_test.go:88: 
----------------------- debugLogs start: false-989259 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-989259" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 19:21:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-315050
contexts:
- context:
    cluster: kubernetes-upgrade-315050
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 19:21:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-315050
  name: kubernetes-upgrade-315050
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-315050
  user:
    client-certificate: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kubernetes-upgrade-315050/client.crt
    client-key: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kubernetes-upgrade-315050/client.key
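This dump explains every "context was not found" line above: the only entry is kubernetes-upgrade-315050 (left behind by a parallel test) and current-context is empty, so kubectl has nothing named false-989259 to resolve. A direct confirmation (illustrative):

	kubectl config get-contexts   # lists only kubernetes-upgrade-315050, with no current context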

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-989259

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989259"

                                                
                                                
----------------------- debugLogs end: false-989259 [took: 4.360481528s] --------------------------------
helpers_test.go:175: Cleaning up "false-989259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-989259
--- PASS: TestNetworkPlugins/group/false (4.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (124.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-320501 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0914 19:24:07.175143  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-320501 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m4.675674608s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-320501 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [591292e9-41c4-4755-a15a-66d19598c4ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 19:25:14.994001  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
helpers_test.go:344: "busybox" [591292e9-41c4-4755-a15a-66d19598c4ca] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.03170355s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-320501 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-320501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-320501 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-320501 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-320501 --alsologtostderr -v=3: (12.21710243s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-320501 -n old-k8s-version-320501
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-320501 -n old-k8s-version-320501: exit status 7 (73.880566ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-320501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
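"status error: exit status 7 (may be ok)" is expected after a stop: minikube composes the status exit code as a bitmask (1 = host, 2 = cluster, 4 = Kubernetes not running), so 7 means all three are down while the profile still exists, which is precisely the state that addons enable must handle here. For reference (illustrative):

	out/minikube-linux-arm64 status -p old-k8s-version-320501 --format='{{.Host}}'; echo "exit: $?"
	# prints Stopped and exit: 7 for a stopped but still-existing profile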

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (667.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-320501 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-320501 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m7.376963627s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-320501 -n old-k8s-version-320501
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (667.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (82.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-629910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:26:04.127588  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:26:09.444688  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-629910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m22.207113079s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-629910 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4af3e134-bf44-4480-8b45-df78387eb665] Pending
helpers_test.go:344: "busybox" [4af3e134-bf44-4480-8b45-df78387eb665] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4af3e134-bf44-4480-8b45-df78387eb665] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.032688378s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-629910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-629910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-629910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087283185s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-629910 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-629910 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-629910 --alsologtostderr -v=3: (12.095258121s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-629910 -n no-preload-629910
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-629910 -n no-preload-629910: exit status 7 (81.862817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-629910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (341.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-629910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:29:12.490719  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
E0914 19:30:14.994097  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:31:04.127872  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:31:09.444697  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-629910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m41.327620215s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-629910 -n no-preload-629910
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (341.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tqnf4" [1a2a46f8-1f2e-460f-a051-0f0c24f12a86] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.031871179s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tqnf4" [1a2a46f8-1f2e-460f-a051-0f0c24f12a86] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011535822s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-629910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-629910 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)
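The image check goes through CRI rather than Docker since the runtime is containerd, and the two "non-minikube" images flagged above are expected leftovers from the test's own workloads (busybox from the DeployApp step, kindnet from the cluster's CNI). Enumerating the tags the same way, assuming jq is available (illustrative):

	out/minikube-linux-arm64 ssh -p no-preload-629910 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'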

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-629910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-629910 -n no-preload-629910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-629910 -n no-preload-629910: exit status 2 (355.967041ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-629910 -n no-preload-629910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-629910 -n no-preload-629910: exit status 2 (343.351328ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-629910 --alsologtostderr -v=1
E0914 19:33:18.038141  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-629910 -n no-preload-629910
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-629910 -n no-preload-629910
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (59.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-825584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-825584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (59.076273568s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-825584 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5334ce8d-adda-47fe-9c1e-eff2971c7dbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5334ce8d-adda-47fe-9c1e-eff2971c7dbb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.032311029s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-825584 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10915872s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-825584 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-825584 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-825584 --alsologtostderr -v=3: (12.152823531s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825584 -n embed-certs-825584
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825584 -n embed-certs-825584: exit status 7 (81.1273ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-825584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
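For reference, the stop-then-enable sequence above can be replayed by hand; a minimal sketch with the commands from this run (exit status 7 from the host probe simply reports a stopped profile, which the test tolerates):

	# Prints "Stopped" and exits 7 once the profile is halted
	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825584 -n embed-certs-825584
	# Addon configuration is still accepted while the cluster is down
	out/minikube-linux-arm64 addons enable dashboard -p embed-certs-825584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4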

TestStartStop/group/embed-certs/serial/SecondStart (339.46s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-825584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:35:14.993259  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:36:04.127096  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:36:09.444685  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-825584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m38.932479763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825584 -n embed-certs-825584
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (339.46s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cr9kj" [9065c311-754a-4130-a46d-6068f153e508] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025096409s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cr9kj" [9065c311-754a-4130-a46d-6068f153e508] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011072s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-320501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-320501 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-320501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-320501 -n old-k8s-version-320501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-320501 -n old-k8s-version-320501: exit status 2 (367.352877ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-320501 -n old-k8s-version-320501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-320501 -n old-k8s-version-320501: exit status 2 (378.133988ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-320501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-320501 -n old-k8s-version-320501
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-320501 -n old-k8s-version-320501
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-383620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:37:01.698345  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:37:02.978528  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:37:05.539742  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:37:10.659970  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:37:20.900414  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:37:41.380662  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-383620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m1.813662521s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-383620 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [311e0682-7b47-4a96-9e6a-c4b4bcf94a33] Pending
helpers_test.go:344: "busybox" [311e0682-7b47-4a96-9e6a-c4b4bcf94a33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [311e0682-7b47-4a96-9e6a-c4b4bcf94a33] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.031248163s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-383620 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-383620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-383620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.180365402s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-383620 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-383620 --alsologtostderr -v=3
E0914 19:38:22.341815  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-383620 --alsologtostderr -v=3: (12.161181736s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620: exit status 7 (77.090307ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-383620 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (349.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-383620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:39:44.262657  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:40:12.712165  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:12.717463  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:12.727719  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:12.747988  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:12.788444  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:12.868704  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:13.029053  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:13.349786  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:13.990073  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:14.994143  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:40:15.270849  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:40:17.832716  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-383620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m48.908601877s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (349.48s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x286h" [3f05980e-2a7a-4044-9525-c37f9c3d6d82] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0914 19:40:22.953939  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x286h" [3f05980e-2a7a-4044-9525-c37f9c3d6d82] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.034778626s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x286h" [3f05980e-2a7a-4044-9525-c37f9c3d6d82] Running
E0914 19:40:33.194689  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010818392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-825584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-825584 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-825584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825584 -n embed-certs-825584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825584 -n embed-certs-825584: exit status 2 (362.250892ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-825584 -n embed-certs-825584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-825584 -n embed-certs-825584: exit status 2 (379.942202ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-825584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825584 -n embed-certs-825584
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-825584 -n embed-certs-825584
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.46s)

TestStartStop/group/newest-cni/serial/FirstStart (46.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-555359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:40:47.175375  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:40:53.674962  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:41:04.127417  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
E0914 19:41:09.444760  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-555359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (46.68269824s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.68s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-555359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-555359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.399084023s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-555359 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-555359 --alsologtostderr -v=3: (1.316528675s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-555359 -n newest-cni-555359
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-555359 -n newest-cni-555359: exit status 7 (71.560901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-555359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (30.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-555359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1
E0914 19:41:34.635884  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:42:00.420696  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-555359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.1: (29.941710209s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-555359 -n newest-cni-555359
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-555359 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (3.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-555359 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-555359 -n newest-cni-555359
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-555359 -n newest-cni-555359: exit status 2 (366.037083ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-555359 -n newest-cni-555359
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-555359 -n newest-cni-555359: exit status 2 (372.401655ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-555359 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-555359 -n newest-cni-555359
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-555359 -n newest-cni-555359
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.48s)

TestNetworkPlugins/group/auto/Start (57.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0914 19:42:28.103120  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/no-preload-629910/client.crt: no such file or directory
E0914 19:42:56.556227  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (57.532323974s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-989259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4qfcr" [b93383c3-4f05-4564-8212-b77d998e29e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4qfcr" [b93383c3-4f05-4564-8212-b77d998e29e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.011662084s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.41s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
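The three connectivity probes above (DNS, Localhost, HairPin) all exec into the netcat deployment; a minimal sketch with the commands from this run, equally applicable to the other *-989259 plugin profiles:

	# DNS: cluster DNS must resolve the kubernetes service
	kubectl --context auto-989259 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod can reach port 8080 on itself
	kubectl --context auto-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod can reach itself through its own service name
	kubectl --context auto-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

Hairpin traffic is the case most sensitive to CNI configuration, which is presumably why each plugin group repeats it.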

TestNetworkPlugins/group/kindnet/Start (68.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m8.184415469s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.18s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9d9fx" [4b330950-f8d7-4479-bf0a-8a33200062e9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9d9fx" [4b330950-f8d7-4479-bf0a-8a33200062e9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.03925688s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9d9fx" [4b330950-f8d7-4479-bf0a-8a33200062e9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011984649s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-383620 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-383620 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-383620 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-383620 --alsologtostderr -v=1: (1.340776937s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620: exit status 2 (462.61232ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620: exit status 2 (473.964832ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-383620 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-383620 -n default-k8s-diff-port-383620
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.54s)
E0914 19:49:25.267596  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:49:30.381009  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:49:50.401802  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:50.407077  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:50.417415  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:50.437926  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:50.478247  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:50.558540  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:50.718919  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:51.039512  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:51.680477  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:52.961463  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:55.521630  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:49:58.038742  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
E0914 19:50:00.641864  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:50:10.882095  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kindnet-989259/client.crt: no such file or directory
E0914 19:50:12.712439  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:50:14.993855  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/ingress-addon-legacy-480282/client.crt: no such file or directory
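The interleaved E0914 cert_rotation.go:168 lines above (and throughout this report) appear to come from client-go's certificate-reload watcher in the shared test process: it still watches client.crt paths belonging to profiles that earlier tests already deleted, so the open() fails. They are background noise attributed to whichever test happens to be running, not failures of that test.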

TestNetworkPlugins/group/calico/Start (77.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.048233046s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.05s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zcnmf" [50faf112-9809-4257-a6ad-32ef67ea17fa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.029677528s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.48s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-989259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9z6r7" [3fa97648-3dd2-48d4-852e-629ddfcd0b17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9z6r7" [3fa97648-3dd2-48d4-852e-629ddfcd0b17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.013793409s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.48s)

TestNetworkPlugins/group/kindnet/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.33s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (63.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0914 19:45:40.396549  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/old-k8s-version-320501/client.crt: no such file or directory
E0914 19:45:52.491538  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.409631569s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.41s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5q5l8" [39dcc403-a969-4283-9f63-ce23a3d194f7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037651758s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.54s)
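Note: KubeletFlags sshes into the node and lists the kubelet process; pgrep -a prints the PID plus the full command line, which is what lets the harness assert on kubelet flags. The same call from Go, as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "calico-989259", "pgrep -a kubelet").CombinedOutput()
	// out holds the kubelet command line, e.g. flags such as
	// --container-runtime-endpoint that the harness can assert on.
	fmt.Println(string(out), err)
}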

TestNetworkPlugins/group/calico/NetCatPod (11.66s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-989259 replace --force -f testdata/netcat-deployment.yaml
E0914 19:46:04.127080  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/addons-531284/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xqrsp" [7a69ec10-e6d7-43e5-9575-21b82dd56361] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xqrsp" [7a69ec10-e6d7-43e5-9575-21b82dd56361] Running
E0914 19:46:09.444627  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/functional-759345/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.014578031s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.66s)
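Note: NetCatPod is a two-step check: kubectl replace --force re-applies the netcat deployment (delete-and-recreate, so reruns are idempotent), then the harness waits for app=netcat to become healthy. A compressed sketch, using kubectl wait as a stand-in for the harness's own poll loop:

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	ctx := "calico-989259"
	// Step 1: delete-and-recreate the deployment from the test manifest.
	run("kubectl", "--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	// Step 2: block until the deployment reports Available.
	run("kubectl", "--context", ctx, "wait", "--for=condition=available",
		"--timeout=15m", "deployment/netcat")
}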

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-989259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c5bb7" [9b39f8b7-9bd9-4249-a253-26a4d110fcf5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c5bb7" [9b39f8b7-9bd9-4249-a253-26a4d110fcf5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010997657s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.52s)

TestNetworkPlugins/group/enable-default-cni/Start (93.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m33.662528725s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.66s)
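Note: --enable-default-cni=true predates the unified --cni flag; minikube's documentation describes it as a deprecated alias for --cni=bridge, so this run is expected to behave like the bridge run further down.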

TestNetworkPlugins/group/custom-flannel/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.39s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.35s)

TestNetworkPlugins/group/flannel/Start (65.38s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0914 19:48:03.347581  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.352918  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.363175  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.383374  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.423638  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.503934  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.664178  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:03.984351  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:04.624977  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:05.905415  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:08.458161  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:08.463438  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:08.465661  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
E0914 19:48:08.473777  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:08.494079  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:08.534398  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:08.614754  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:08.775223  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:09.095837  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:09.736544  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:11.017070  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:13.577893  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
E0914 19:48:13.586240  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m5.375597164s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.38s)
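Note: the E0914 cert_rotation lines interleaved through this (passing) start appear to come from the shared test process's client-go certificate watchers still pointing at kubeconfig entries for profiles that earlier tests already deleted (old-k8s-version, functional, auto, default-k8s-diff-port); the client.crt files they try to reload no longer exist, so the messages are noise rather than a failure of this test.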

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-989259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z2gwb" [210640b8-8e98-45b0-ab54-b16bdc261c56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 19:48:18.698559  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z2gwb" [210640b8-8e98-45b0-ab54-b16bdc261c56] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.010616786s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.46s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ctfz8" [6cb07e7b-f57e-47b0-9b18-59d6f79133d2] Running
E0914 19:48:23.827149  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/default-k8s-diff-port-383620/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.025968399s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-989259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n6xrs" [de52ce87-a5f9-40d4-a518-505ac90f3bbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 19:48:28.939615  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-n6xrs" [de52ce87-a5f9-40d4-a518-505ac90f3bbe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.022374359s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.32s)

TestNetworkPlugins/group/bridge/Start (87.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0914 19:48:49.420426  498029 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/auto-989259/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-989259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m27.959651522s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.96s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-989259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-989259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kqt68" [8be424c6-e0dc-4a95-866f-ee9c58b0ece1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kqt68" [8be424c6-e0dc-4a95-866f-ee9c58b0ece1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010998862s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.32s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-989259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-989259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

Test skip (28/303)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-294869 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-294869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-294869
--- SKIP: TestDownloadOnlyKic (0.62s)
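Note: unlike the 0.00s skips around it, TestDownloadOnlyKic costs 0.62s because the profile it registered still has to be deleted after the skip decision. A hedged Go sketch of that skip-with-cleanup pattern (the test name and profile below are illustrative, not the real code):

package example

import (
	"os/exec"
	"runtime"
	"testing"
)

func TestDownloadOnlyKicSketch(t *testing.T) {
	profile := "download-docker-sketch" // hypothetical profile name
	// Cleanup runs even when the test skips, which mirrors the
	// helpers_test.go "Cleaning up ... profile" lines above.
	t.Cleanup(func() {
		// Best-effort delete; errors are ignored on purpose.
		_ = exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run()
	})
	if runtime.GOARCH == "arm64" {
		t.Skip("Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144")
	}
}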

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-753591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-753591
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.89s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-989259 [pass: true] --------------------------------
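Note: the skip still produces the full debugLogs bundle below. Because the kubenet-989259 profile was never started, every probe fails at lookup time, either in kubectl ("context was not found" / context does not exist) or in minikube ("Profile ... not found"), rather than reporting anything about a live cluster.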
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-989259

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-989259

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /etc/hosts:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /etc/resolv.conf:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-989259

>>> host: crictl pods:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: crictl containers:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> k8s: describe netcat deployment:
error: context "kubenet-989259" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-989259" does not exist

>>> k8s: netcat logs:
error: context "kubenet-989259" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-989259" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-989259" does not exist

>>> k8s: coredns logs:
error: context "kubenet-989259" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-989259" does not exist

>>> k8s: api server logs:
error: context "kubenet-989259" does not exist

>>> host: /etc/cni:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: ip a s:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: ip r s:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: iptables-save:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: iptables table nat:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-989259" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-989259" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-989259" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: kubelet daemon config:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> k8s: kubelet logs:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 19:21:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-315050
contexts:
- context:
    cluster: kubernetes-upgrade-315050
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 19:21:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-315050
  name: kubernetes-upgrade-315050
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-315050
  user:
    client-certificate: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kubernetes-upgrade-315050/client.crt
    client-key: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kubernetes-upgrade-315050/client.key
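
Note: the dumped kubeconfig holds only a leftover kubernetes-upgrade-315050 entry and an empty current-context, which is why every kubectl probe in this bundle fails before reaching any cluster. A hedged client-go sketch of that failing lookup (the kubeconfig path shown is an assumption):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the probes used (path is illustrative).
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/17217-492678/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubectl reports "context was not found" because this map has no
	// kubenet-989259 entry, only kubernetes-upgrade-315050.
	if _, ok := cfg.Contexts["kubenet-989259"]; !ok {
		fmt.Println("context was not found for specified context: kubenet-989259")
	}
}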

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-989259

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: cri-dockerd version:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: containerd daemon status:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: containerd daemon config:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: containerd config dump:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: crio daemon status:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: crio daemon config:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: /etc/crio:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"

>>> host: crio config:
* Profile "kubenet-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989259"
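
Each ">>> host:" probe above shells into the profile's node, which is why a missing profile short-circuits them all with the same message. With a live profile, roughly equivalent manual checks would be (a sketch; the exact commands the debugLogs harness runs are not shown in this log):

	minikube -p kubenet-989259 ssh -- systemctl status containerd --no-pager
	minikube -p kubenet-989259 ssh -- sudo cat /etc/containerd/config.toml
	minikube -p kubenet-989259 ssh -- sudo containerd config dump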

----------------------- debugLogs end: kubenet-989259 [took: 3.702572559s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-989259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-989259
--- SKIP: TestNetworkPlugins/group/kubenet (3.89s)

TestNetworkPlugins/group/cilium (6.61s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523:
----------------------- debugLogs start: cilium-989259 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-989259

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-989259
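
The ">>> netcat:" probes run DNS checks from a throwaway pod against the cluster DNS service at 10.96.0.10; with no cilium-989259 context in the kubeconfig, every probe fails at the kubectl layer before anything is sent. Against a live cluster, equivalent probes would look roughly like this (a sketch; "netcat" as the pod name is an assumption, and the flags follow common dig/nc usage rather than the harness's exact invocation):

	kubectl --context cilium-989259 exec netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
	kubectl --context cilium-989259 exec netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local
	kubectl --context cilium-989259 exec netcat -- nc -u -z -w 2 10.96.0.10 53
	kubectl --context cilium-989259 exec netcat -- nc -z -w 2 10.96.0.10 53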

>>> host: /etc/nsswitch.conf:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/hosts:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/resolv.conf:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-989259

>>> host: crictl pods:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: crictl containers:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> k8s: describe netcat deployment:
error: context "cilium-989259" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-989259" does not exist

>>> k8s: netcat logs:
error: context "cilium-989259" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-989259" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-989259" does not exist

>>> k8s: coredns logs:
error: context "cilium-989259" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-989259" does not exist

>>> k8s: api server logs:
error: context "cilium-989259" does not exist

>>> host: /etc/cni:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: ip a s:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: ip r s:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: iptables-save:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: iptables table nat:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-989259

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-989259

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-989259" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-989259" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-989259

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-989259

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-989259" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-989259" does not exist
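
If the cluster existed, the cilium daemon set and deployment diagnostics above would reduce to standard kubectl queries like these (a sketch; the k8s-app=cilium label is the conventional Cilium selector and is an assumption about what the harness filters on):

	kubectl --context cilium-989259 -n kube-system describe ds cilium
	kubectl --context cilium-989259 -n kube-system logs -l k8s-app=cilium --tail=100
	kubectl --context cilium-989259 -n kube-system logs -l k8s-app=cilium --previous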

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-989259" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-989259" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-989259" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: kubelet daemon config:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> k8s: kubelet logs:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17217-492678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 19:21:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-315050
contexts:
- context:
    cluster: kubernetes-upgrade-315050
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 19:21:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-315050
  name: kubernetes-upgrade-315050
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-315050
  user:
    client-certificate: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kubernetes-upgrade-315050/client.crt
    client-key: /home/jenkins/minikube-integration/17217-492678/.minikube/profiles/kubernetes-upgrade-315050/client.key
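
Note that this kubeconfig has current-context: "" and contains only a kubernetes-upgrade-315050 entry, which is consistent with every "context was not found" error above: the skipped test never created a cilium-989259 profile or context. A quick way to confirm that state from a shell (a sketch; the comments describe expected output, not output captured in this run):

	kubectl config get-contexts     # lists only kubernetes-upgrade-315050
	kubectl config current-context  # errors: current-context is not set
	minikube profile list           # no cilium-989259 profile listed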

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-989259

>>> host: docker daemon status:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: docker daemon config:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: docker system info:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: cri-docker daemon status:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: cri-docker daemon config:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: cri-dockerd version:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: containerd daemon status:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: containerd daemon config:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: containerd config dump:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: crio daemon status:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: crio daemon config:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: /etc/crio:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

>>> host: crio config:
* Profile "cilium-989259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989259"

----------------------- debugLogs end: cilium-989259 [took: 6.372740497s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-989259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-989259
--- SKIP: TestNetworkPlugins/group/cilium (6.61s)