Test Report: Docker_Linux_containerd_arm64 17363

9401f4c578044658a0ecc50e70738aa1fc99eff9:2023-10-05:31314

Failed tests (8/307)

TestAddons/parallel/Ingress (38.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-223209 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-223209 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-223209 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [216282c1-634b-40d6-ba25-caf345e59b4b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [216282c1-634b-40d6-ba25-caf345e59b4b] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.013864361s
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-223209 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:295: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.06426155s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:297: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:301: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-223209 addons disable ingress-dns --alsologtostderr -v=1: (1.147885792s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-223209 addons disable ingress --alsologtostderr -v=1: (7.770331714s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-223209
helpers_test.go:235: (dbg) docker inspect addons-223209:
-- stdout --
	[
	    {
	        "Id": "ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e",
	        "Created": "2023-10-05T21:01:00.758903804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1118869,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:01:01.095576047Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/hosts",
	        "LogPath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e-json.log",
	        "Name": "/addons-223209",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-223209:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-223209",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d-init/diff:/var/lib/docker/overlay2/0ac9dde3ffb5508a08f1d2d343ad7198828af6fb1810d9bf7c6479a8d59aaca8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-223209",
	                "Source": "/var/lib/docker/volumes/addons-223209/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-223209",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-223209",
	                "name.minikube.sigs.k8s.io": "addons-223209",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa1abcf8677c05b539804f8c24e8a6d9339b9d34885799dc4fc917d69d48f6da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34008"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34007"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34004"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34006"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34005"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aa1abcf8677c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-223209": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ed307c47b576",
	                        "addons-223209"
	                    ],
	                    "NetworkID": "e57d17fa4807df22d27e586abf820741faf5db521f740672ffc05b138f35425a",
	                    "EndpointID": "e898ac43b4f5f0ba7ca2c85824713d2544cc2616522a7c05a2cfa7433382fab8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-223209 -n addons-223209
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-223209 logs -n 25: (1.626175214s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | -p download-only-610377                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| delete  | -p download-only-610377                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| delete  | -p download-only-610377                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| start   | --download-only -p                                                                          | download-docker-853390 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | download-docker-853390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-853390                                                                   | download-docker-853390 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-750535   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | binary-mirror-750535                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36693                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-750535                                                                     | binary-mirror-750535   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| addons  | disable dashboard -p                                                                        | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-223209 --wait=true                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-223209 ssh cat                                                                       | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | /opt/local-path-provisioner/pvc-f6a4555f-aa36-48f9-875a-61866ab03538_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-223209 addons disable                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-223209 ip                                                                            | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	| addons  | addons-223209 addons disable                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC |                     |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | -p addons-223209                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| addons  | addons-223209 addons                                                                        | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-223209 ssh curl -s                                                                   | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-223209 ip                                                                            | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	| addons  | addons-223209 addons                                                                        | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:04 UTC | 05 Oct 23 21:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-223209 addons disable                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:04 UTC | 05 Oct 23 21:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-223209 addons disable                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:04 UTC | 05 Oct 23 21:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-223209 addons                                                                        | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:04 UTC | 05 Oct 23 21:04 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:00:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:00:37.008164 1118408 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:00:37.008539 1118408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:37.008574 1118408 out.go:309] Setting ErrFile to fd 2...
	I1005 21:00:37.008595 1118408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:37.008898 1118408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:00:37.009498 1118408 out.go:303] Setting JSON to false
	I1005 21:00:37.010701 1118408 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24183,"bootTime":1696515454,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:00:37.010854 1118408 start.go:138] virtualization:  
	I1005 21:00:37.014434 1118408 out.go:177] * [addons-223209] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:00:37.017590 1118408 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:00:37.017764 1118408 notify.go:220] Checking for updates...
	I1005 21:00:37.019995 1118408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:00:37.022564 1118408 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:00:37.024841 1118408 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:00:37.027548 1118408 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:00:37.029708 1118408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:00:37.032217 1118408 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:00:37.060471 1118408 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:00:37.060574 1118408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:37.140521 1118408 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:00:37.129978497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:37.140622 1118408 docker.go:294] overlay module found
	I1005 21:00:37.143010 1118408 out.go:177] * Using the docker driver based on user configuration
	I1005 21:00:37.145064 1118408 start.go:298] selected driver: docker
	I1005 21:00:37.145084 1118408 start.go:902] validating driver "docker" against <nil>
	I1005 21:00:37.145100 1118408 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:00:37.145795 1118408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:37.215225 1118408 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:00:37.205554357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:37.215389 1118408 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:00:37.215621 1118408 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 21:00:37.217783 1118408 out.go:177] * Using Docker driver with root privileges
	I1005 21:00:37.219840 1118408 cni.go:84] Creating CNI manager for ""
	I1005 21:00:37.219861 1118408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:00:37.219873 1118408 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:00:37.219894 1118408 start_flags.go:321] config:
	{Name:addons-223209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:00:37.222428 1118408 out.go:177] * Starting control plane node addons-223209 in cluster addons-223209
	I1005 21:00:37.224224 1118408 cache.go:122] Beginning downloading kic base image for docker with containerd
	I1005 21:00:37.225842 1118408 out.go:177] * Pulling base image ...
	I1005 21:00:37.227570 1118408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:00:37.227622 1118408 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4
	I1005 21:00:37.227635 1118408 cache.go:57] Caching tarball of preloaded images
	I1005 21:00:37.227661 1118408 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:00:37.227705 1118408 preload.go:174] Found /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1005 21:00:37.227715 1118408 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on containerd
	I1005 21:00:37.228096 1118408 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/config.json ...
	I1005 21:00:37.228122 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/config.json: {Name:mkf3266e30624d753f83a833e37134b9aadd9fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:00:37.245111 1118408 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:00:37.245193 1118408 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:00:37.245211 1118408 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 21:00:37.245216 1118408 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 21:00:37.245223 1118408 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:00:37.245228 1118408 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from local cache
	I1005 21:00:53.153479 1118408 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from cached tarball
	I1005 21:00:53.153518 1118408 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:00:53.153570 1118408 start.go:365] acquiring machines lock for addons-223209: {Name:mk0a6c99c13897b18be35158ba2129fcb313a3ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:00:53.154135 1118408 start.go:369] acquired machines lock for "addons-223209" in 537.991µs
	I1005 21:00:53.154174 1118408 start.go:93] Provisioning new machine with config: &{Name:addons-223209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 21:00:53.154269 1118408 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:00:53.156712 1118408 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1005 21:00:53.156983 1118408 start.go:159] libmachine.API.Create for "addons-223209" (driver="docker")
	I1005 21:00:53.157020 1118408 client.go:168] LocalClient.Create starting
	I1005 21:00:53.157144 1118408 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem
	I1005 21:00:54.714405 1118408 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem
	I1005 21:00:55.121915 1118408 cli_runner.go:164] Run: docker network inspect addons-223209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:00:55.144839 1118408 cli_runner.go:211] docker network inspect addons-223209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:00:55.144924 1118408 network_create.go:281] running [docker network inspect addons-223209] to gather additional debugging logs...
	I1005 21:00:55.144948 1118408 cli_runner.go:164] Run: docker network inspect addons-223209
	W1005 21:00:55.163003 1118408 cli_runner.go:211] docker network inspect addons-223209 returned with exit code 1
	I1005 21:00:55.163041 1118408 network_create.go:284] error running [docker network inspect addons-223209]: docker network inspect addons-223209: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-223209 not found
	I1005 21:00:55.163102 1118408 network_create.go:286] output of [docker network inspect addons-223209]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-223209 not found
	
	** /stderr **
	I1005 21:00:55.163235 1118408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:00:55.183161 1118408 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000c89300}
	I1005 21:00:55.183202 1118408 network_create.go:124] attempt to create docker network addons-223209 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 21:00:55.183265 1118408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-223209 addons-223209
	I1005 21:00:55.260203 1118408 network_create.go:108] docker network addons-223209 192.168.49.0/24 created
	I1005 21:00:55.260233 1118408 kic.go:117] calculated static IP "192.168.49.2" for the "addons-223209" container
	I1005 21:00:55.260307 1118408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:00:55.277362 1118408 cli_runner.go:164] Run: docker volume create addons-223209 --label name.minikube.sigs.k8s.io=addons-223209 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:00:55.296478 1118408 oci.go:103] Successfully created a docker volume addons-223209
	I1005 21:00:55.296568 1118408 cli_runner.go:164] Run: docker run --rm --name addons-223209-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-223209 --entrypoint /usr/bin/test -v addons-223209:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:00:56.476257 1118408 cli_runner.go:217] Completed: docker run --rm --name addons-223209-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-223209 --entrypoint /usr/bin/test -v addons-223209:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.179632722s)
	I1005 21:00:56.476285 1118408 oci.go:107] Successfully prepared a docker volume addons-223209
	I1005 21:00:56.476310 1118408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:00:56.476329 1118408 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:00:56.476413 1118408 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-223209:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:01:00.674853 1118408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-223209:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.198394173s)
	I1005 21:01:00.674886 1118408 kic.go:199] duration metric: took 4.198553 seconds to extract preloaded images to volume
	W1005 21:01:00.675033 1118408 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:01:00.675185 1118408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:01:00.742475 1118408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-223209 --name addons-223209 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-223209 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-223209 --network addons-223209 --ip 192.168.49.2 --volume addons-223209:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:01:01.105229 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Running}}
	I1005 21:01:01.128836 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:01.157645 1118408 cli_runner.go:164] Run: docker exec addons-223209 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:01:01.246399 1118408 oci.go:144] the created container "addons-223209" has a running status.
	I1005 21:01:01.246425 1118408 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa...
	I1005 21:01:02.294283 1118408 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:01:02.331511 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:02.357146 1118408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:01:02.357171 1118408 kic_runner.go:114] Args: [docker exec --privileged addons-223209 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:01:02.428641 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:02.450897 1118408 machine.go:88] provisioning docker machine ...
	I1005 21:01:02.450935 1118408 ubuntu.go:169] provisioning hostname "addons-223209"
	I1005 21:01:02.451009 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:02.475209 1118408 main.go:141] libmachine: Using SSH client type: native
	I1005 21:01:02.475641 1118408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34008 <nil> <nil>}
	I1005 21:01:02.475658 1118408 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-223209 && echo "addons-223209" | sudo tee /etc/hostname
	I1005 21:01:02.623205 1118408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-223209
	
	I1005 21:01:02.623280 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:02.642355 1118408 main.go:141] libmachine: Using SSH client type: native
	I1005 21:01:02.642771 1118408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34008 <nil> <nil>}
	I1005 21:01:02.642796 1118408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-223209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-223209/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-223209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:01:02.772371 1118408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:01:02.772403 1118408 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1112519/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1112519/.minikube}
	I1005 21:01:02.772425 1118408 ubuntu.go:177] setting up certificates
	I1005 21:01:02.772434 1118408 provision.go:83] configureAuth start
	I1005 21:01:02.772500 1118408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-223209
	I1005 21:01:02.790149 1118408 provision.go:138] copyHostCerts
	I1005 21:01:02.790236 1118408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.pem (1082 bytes)
	I1005 21:01:02.790365 1118408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/cert.pem (1123 bytes)
	I1005 21:01:02.790426 1118408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/key.pem (1675 bytes)
	I1005 21:01:02.790475 1118408 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem org=jenkins.addons-223209 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-223209]
	I1005 21:01:03.152035 1118408 provision.go:172] copyRemoteCerts
	I1005 21:01:03.152133 1118408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:01:03.152184 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.171105 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.269984 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:01:03.298018 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1005 21:01:03.326087 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:01:03.354370 1118408 provision.go:86] duration metric: configureAuth took 581.913688ms
	I1005 21:01:03.354398 1118408 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:01:03.354597 1118408 config.go:182] Loaded profile config "addons-223209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:01:03.354611 1118408 machine.go:91] provisioned docker machine in 903.694075ms
	I1005 21:01:03.354618 1118408 client.go:171] LocalClient.Create took 10.197589343s
	I1005 21:01:03.354638 1118408 start.go:167] duration metric: libmachine.API.Create for "addons-223209" took 10.197658511s
	I1005 21:01:03.354651 1118408 start.go:300] post-start starting for "addons-223209" (driver="docker")
	I1005 21:01:03.354660 1118408 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:01:03.354718 1118408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:01:03.354765 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.372999 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.470210 1118408 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:01:03.474605 1118408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:01:03.474640 1118408 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:01:03.474651 1118408 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:01:03.474658 1118408 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:01:03.474668 1118408 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1112519/.minikube/addons for local assets ...
	I1005 21:01:03.474737 1118408 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1112519/.minikube/files for local assets ...
	I1005 21:01:03.474760 1118408 start.go:303] post-start completed in 120.102758ms
	I1005 21:01:03.475100 1118408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-223209
	I1005 21:01:03.492917 1118408 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/config.json ...
	I1005 21:01:03.493190 1118408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:01:03.493232 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.511922 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.605199 1118408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:01:03.611128 1118408 start.go:128] duration metric: createHost completed in 10.456840901s
	I1005 21:01:03.611149 1118408 start.go:83] releasing machines lock for "addons-223209", held for 10.456996159s
	I1005 21:01:03.611226 1118408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-223209
	I1005 21:01:03.629257 1118408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:01:03.629380 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.629458 1118408 ssh_runner.go:195] Run: cat /version.json
	I1005 21:01:03.629501 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.650093 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.665118 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.878031 1118408 ssh_runner.go:195] Run: systemctl --version
	I1005 21:01:03.883502 1118408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:01:03.889018 1118408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 21:01:03.920558 1118408 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:01:03.920638 1118408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:01:03.956949 1118408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 21:01:03.956974 1118408 start.go:469] detecting cgroup driver to use...
	I1005 21:01:03.957005 1118408 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:01:03.957065 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1005 21:01:03.971716 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 21:01:03.985426 1118408 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:01:03.985498 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:01:04.002119 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:01:04.021123 1118408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:01:04.120753 1118408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:01:04.223016 1118408 docker.go:213] disabling docker service ...
	I1005 21:01:04.223116 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:01:04.244909 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:01:04.259634 1118408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:01:04.358411 1118408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:01:04.454973 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:01:04.468547 1118408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:01:04.489563 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1005 21:01:04.501550 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 21:01:04.513986 1118408 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 21:01:04.514082 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 21:01:04.526022 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 21:01:04.537672 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 21:01:04.549376 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 21:01:04.561362 1118408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:01:04.572351 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 21:01:04.583860 1118408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:01:04.594270 1118408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:01:04.605470 1118408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:01:04.693504 1118408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 21:01:04.844434 1118408 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I1005 21:01:04.844590 1118408 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1005 21:01:04.849531 1118408 start.go:537] Will wait 60s for crictl version
	I1005 21:01:04.849644 1118408 ssh_runner.go:195] Run: which crictl
	I1005 21:01:04.854315 1118408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:01:04.897907 1118408 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1005 21:01:04.897994 1118408 ssh_runner.go:195] Run: containerd --version
	I1005 21:01:04.927978 1118408 ssh_runner.go:195] Run: containerd --version
	I1005 21:01:04.968047 1118408 out.go:177] * Preparing Kubernetes v1.28.2 on containerd 1.6.24 ...
	I1005 21:01:04.970607 1118408 cli_runner.go:164] Run: docker network inspect addons-223209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:01:04.988202 1118408 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 21:01:04.993024 1118408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:01:05.009006 1118408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:01:05.009094 1118408 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:01:05.059662 1118408 containerd.go:604] all images are preloaded for containerd runtime.
	I1005 21:01:05.059689 1118408 containerd.go:518] Images already preloaded, skipping extraction
	I1005 21:01:05.059750 1118408 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:01:05.102540 1118408 containerd.go:604] all images are preloaded for containerd runtime.
	I1005 21:01:05.102566 1118408 cache_images.go:84] Images are preloaded, skipping loading
	I1005 21:01:05.102633 1118408 ssh_runner.go:195] Run: sudo crictl info
	I1005 21:01:05.146002 1118408 cni.go:84] Creating CNI manager for ""
	I1005 21:01:05.146028 1118408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:01:05.146059 1118408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:01:05.146079 1118408 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-223209 NodeName:addons-223209 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 21:01:05.146214 1118408 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-223209"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 21:01:05.146291 1118408 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-223209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 21:01:05.146367 1118408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 21:01:05.158327 1118408 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:01:05.158416 1118408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:01:05.169757 1118408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1005 21:01:05.191831 1118408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 21:01:05.213599 1118408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1005 21:01:05.234866 1118408 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:01:05.239368 1118408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:01:05.252840 1118408 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209 for IP: 192.168.49.2
	I1005 21:01:05.252870 1118408 certs.go:190] acquiring lock for shared ca certs: {Name:mkf0b25ffbb252c0d3d05e76f2fd0942f3acc421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:05.253006 1118408 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key
	I1005 21:01:05.462536 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt ...
	I1005 21:01:05.462569 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt: {Name:mk59ad5af18c1957a1db1754f40aab717d69629f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:05.463146 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key ...
	I1005 21:01:05.463163 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key: {Name:mk819a95ec4daa166ffab18d1a533d72044e25b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:05.463258 1118408 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key
	I1005 21:01:06.088788 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt ...
	I1005 21:01:06.088824 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt: {Name:mk4a245c6fd8fe7e8d5596a403fd1394a84fb238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.089020 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key ...
	I1005 21:01:06.089034 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key: {Name:mk8bd30f29e81a733aa84014449b7f2a9f5439d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.089661 1118408 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.key
	I1005 21:01:06.089681 1118408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt with IP's: []
	I1005 21:01:06.644963 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt ...
	I1005 21:01:06.644994 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: {Name:mk33908b56a3fba0a4f5f6165ee76f8f0b8c55f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.645628 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.key ...
	I1005 21:01:06.645644 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.key: {Name:mk6363ec5fc3f67b793f468d057224efc1831281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.645732 1118408 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2
	I1005 21:01:06.645750 1118408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 21:01:06.938798 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2 ...
	I1005 21:01:06.938830 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2: {Name:mke1fcf1c35a86f9eb294b89b16cb7efa5018505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.939012 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2 ...
	I1005 21:01:06.939025 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2: {Name:mk751ae2204d770bc6bdddd2ae20ed01a62e0e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.939611 1118408 certs.go:337] copying /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt
	I1005 21:01:06.939693 1118408 certs.go:341] copying /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key
	I1005 21:01:06.939744 1118408 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key
	I1005 21:01:06.939763 1118408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt with IP's: []
	I1005 21:01:07.473102 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt ...
	I1005 21:01:07.473136 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt: {Name:mk34a90bb6929e42abf5c72755987f5b87e923e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:07.473851 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key ...
	I1005 21:01:07.473875 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key: {Name:mk87ae58d2de8029c6830915bb71bc16bb867266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:07.474498 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:01:07.474907 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:01:07.474984 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:01:07.475026 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem (1675 bytes)
	I1005 21:01:07.476014 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:01:07.507745 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 21:01:07.537700 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:01:07.566810 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1005 21:01:07.595809 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:01:07.624205 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 21:01:07.652008 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:01:07.679606 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:01:07.707586 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:01:07.735876 1118408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:01:07.756222 1118408 ssh_runner.go:195] Run: openssl version
	I1005 21:01:07.763418 1118408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:01:07.774840 1118408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:01:07.779579 1118408 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:01:07.779665 1118408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:01:07.788001 1118408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 21:01:07.799592 1118408 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:01:07.803956 1118408 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:01:07.804005 1118408 kubeadm.go:404] StartCluster: {Name:addons-223209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:01:07.804084 1118408 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1005 21:01:07.804141 1118408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:01:07.847520 1118408 cri.go:89] found id: ""
	I1005 21:01:07.847621 1118408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:01:07.858170 1118408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:01:07.868889 1118408 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 21:01:07.868982 1118408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:01:07.879730 1118408 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:01:07.879774 1118408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 21:01:07.931385 1118408 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 21:01:07.931605 1118408 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 21:01:07.977114 1118408 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:01:07.977246 1118408 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:01:07.977306 1118408 kubeadm.go:322] OS: Linux
	I1005 21:01:07.977377 1118408 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 21:01:07.977457 1118408 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 21:01:07.977535 1118408 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 21:01:07.977610 1118408 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 21:01:07.977685 1118408 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 21:01:07.977763 1118408 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 21:01:07.977836 1118408 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1005 21:01:07.977910 1118408 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1005 21:01:07.977982 1118408 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1005 21:01:08.065799 1118408 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:01:08.065965 1118408 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:01:08.066094 1118408 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 21:01:08.324870 1118408 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:01:08.329444 1118408 out.go:204]   - Generating certificates and keys ...
	I1005 21:01:08.329628 1118408 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 21:01:08.329694 1118408 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 21:01:08.640101 1118408 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:01:08.977292 1118408 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:01:09.629905 1118408 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 21:01:09.836211 1118408 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 21:01:10.188692 1118408 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 21:01:10.189084 1118408 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-223209 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:01:10.630334 1118408 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 21:01:10.630711 1118408 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-223209 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:01:10.934508 1118408 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:01:11.207192 1118408 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:01:11.683150 1118408 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 21:01:11.683450 1118408 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:01:12.196784 1118408 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:01:12.767271 1118408 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:01:13.468096 1118408 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:01:14.142981 1118408 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:01:14.143852 1118408 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:01:14.146665 1118408 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:01:14.149492 1118408 out.go:204]   - Booting up control plane ...
	I1005 21:01:14.149633 1118408 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:01:14.149707 1118408 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:01:14.150177 1118408 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:01:14.164883 1118408 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:01:14.165692 1118408 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:01:14.166024 1118408 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 21:01:14.271767 1118408 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:01:22.279875 1118408 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.007244 seconds
	I1005 21:01:22.280489 1118408 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:01:22.297535 1118408 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:01:22.826977 1118408 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:01:22.827186 1118408 kubeadm.go:322] [mark-control-plane] Marking the node addons-223209 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 21:01:23.340662 1118408 kubeadm.go:322] [bootstrap-token] Using token: 0g8b6d.se6cruugf1au57p3
	I1005 21:01:23.343075 1118408 out.go:204]   - Configuring RBAC rules ...
	I1005 21:01:23.343195 1118408 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:01:23.349269 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:01:23.361040 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:01:23.365127 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:01:23.371850 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:01:23.377366 1118408 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:01:23.393413 1118408 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:01:23.649105 1118408 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 21:01:23.762350 1118408 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 21:01:23.765083 1118408 kubeadm.go:322] 
	I1005 21:01:23.765153 1118408 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 21:01:23.765160 1118408 kubeadm.go:322] 
	I1005 21:01:23.765232 1118408 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 21:01:23.765237 1118408 kubeadm.go:322] 
	I1005 21:01:23.765261 1118408 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 21:01:23.765316 1118408 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:01:23.765364 1118408 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:01:23.765369 1118408 kubeadm.go:322] 
	I1005 21:01:23.765419 1118408 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 21:01:23.765424 1118408 kubeadm.go:322] 
	I1005 21:01:23.765468 1118408 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 21:01:23.765473 1118408 kubeadm.go:322] 
	I1005 21:01:23.765522 1118408 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 21:01:23.765591 1118408 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:01:23.765655 1118408 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:01:23.765660 1118408 kubeadm.go:322] 
	I1005 21:01:23.766026 1118408 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:01:23.766170 1118408 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 21:01:23.766193 1118408 kubeadm.go:322] 
	I1005 21:01:23.766327 1118408 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0g8b6d.se6cruugf1au57p3 \
	I1005 21:01:23.766578 1118408 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:571092fde05632971def08ad2a457b2fd089790ef449e849065ad5827b1ed47e \
	I1005 21:01:23.766606 1118408 kubeadm.go:322] 	--control-plane 
	I1005 21:01:23.766616 1118408 kubeadm.go:322] 
	I1005 21:01:23.766703 1118408 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:01:23.766707 1118408 kubeadm.go:322] 
	I1005 21:01:23.766791 1118408 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0g8b6d.se6cruugf1au57p3 \
	I1005 21:01:23.766895 1118408 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:571092fde05632971def08ad2a457b2fd089790ef449e849065ad5827b1ed47e 
	I1005 21:01:23.769659 1118408 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:01:23.769769 1118408 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:01:23.769783 1118408 cni.go:84] Creating CNI manager for ""
	I1005 21:01:23.769790 1118408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:01:23.772274 1118408 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:01:23.774187 1118408 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:01:23.780468 1118408 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 21:01:23.780485 1118408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:01:23.809024 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:01:24.765978 1118408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:01:24.766113 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:24.766205 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=addons-223209 minikube.k8s.io/updated_at=2023_10_05T21_01_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:24.914428 1118408 ops.go:34] apiserver oom_adj: -16
	I1005 21:01:24.914514 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:25.061931 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:25.689978 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:26.189907 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:26.690408 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:27.190473 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:27.690489 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:28.189613 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:28.689644 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:29.189944 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:29.689530 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:30.190418 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:30.689622 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:31.190210 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:31.689964 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:32.189603 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:32.690592 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:33.190011 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:33.689801 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:34.190399 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:34.690136 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:35.189618 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:35.690027 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:36.190494 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:36.359181 1118408 kubeadm.go:1081] duration metric: took 11.593113966s to wait for elevateKubeSystemPrivileges.
	I1005 21:01:36.359206 1118408 kubeadm.go:406] StartCluster complete in 28.555205013s
	I1005 21:01:36.359223 1118408 settings.go:142] acquiring lock: {Name:mk8ac06a875c8ddea9ee6a3c248c409c1d3f301d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:36.359714 1118408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:01:36.360108 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/kubeconfig: {Name:mk4151b883e566a83b3cbe0bf9e01957efa61f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:36.362433 1118408 config.go:182] Loaded profile config "addons-223209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:01:36.362478 1118408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:01:36.362682 1118408 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1005 21:01:36.362823 1118408 addons.go:69] Setting volumesnapshots=true in profile "addons-223209"
	I1005 21:01:36.362838 1118408 addons.go:231] Setting addon volumesnapshots=true in "addons-223209"
	I1005 21:01:36.362875 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.363362 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.364104 1118408 addons.go:69] Setting ingress-dns=true in profile "addons-223209"
	I1005 21:01:36.364172 1118408 addons.go:231] Setting addon ingress-dns=true in "addons-223209"
	I1005 21:01:36.364255 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.364726 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.365083 1118408 addons.go:69] Setting inspektor-gadget=true in profile "addons-223209"
	I1005 21:01:36.365102 1118408 addons.go:231] Setting addon inspektor-gadget=true in "addons-223209"
	I1005 21:01:36.365133 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.365569 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.365731 1118408 addons.go:69] Setting cloud-spanner=true in profile "addons-223209"
	I1005 21:01:36.365761 1118408 addons.go:231] Setting addon cloud-spanner=true in "addons-223209"
	I1005 21:01:36.365803 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.366197 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.366505 1118408 addons.go:69] Setting metrics-server=true in profile "addons-223209"
	I1005 21:01:36.366524 1118408 addons.go:231] Setting addon metrics-server=true in "addons-223209"
	I1005 21:01:36.366554 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.366937 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.370738 1118408 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-223209"
	I1005 21:01:36.370803 1118408 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-223209"
	I1005 21:01:36.370844 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.371355 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.371872 1118408 addons.go:69] Setting registry=true in profile "addons-223209"
	I1005 21:01:36.371891 1118408 addons.go:231] Setting addon registry=true in "addons-223209"
	I1005 21:01:36.371923 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.372315 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.387219 1118408 addons.go:69] Setting default-storageclass=true in profile "addons-223209"
	I1005 21:01:36.387252 1118408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-223209"
	I1005 21:01:36.387566 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.391232 1118408 addons.go:69] Setting storage-provisioner=true in profile "addons-223209"
	I1005 21:01:36.391265 1118408 addons.go:231] Setting addon storage-provisioner=true in "addons-223209"
	I1005 21:01:36.391311 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.391756 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.407148 1118408 addons.go:69] Setting gcp-auth=true in profile "addons-223209"
	I1005 21:01:36.407188 1118408 mustload.go:65] Loading cluster: addons-223209
	I1005 21:01:36.407401 1118408 config.go:182] Loaded profile config "addons-223209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:01:36.407649 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.407789 1118408 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-223209"
	I1005 21:01:36.407803 1118408 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-223209"
	I1005 21:01:36.408038 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.426208 1118408 addons.go:69] Setting ingress=true in profile "addons-223209"
	I1005 21:01:36.426242 1118408 addons.go:231] Setting addon ingress=true in "addons-223209"
	I1005 21:01:36.426297 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.426745 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.577490 1118408 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1005 21:01:36.591855 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1005 21:01:36.591873 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1005 21:01:36.591931 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.612377 1118408 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1005 21:01:36.619825 1118408 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1005 21:01:36.619902 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1005 21:01:36.620131 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.629790 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1005 21:01:36.641210 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1005 21:01:36.643612 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1005 21:01:36.647175 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1005 21:01:36.650031 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1005 21:01:36.652805 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1005 21:01:36.655021 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1005 21:01:36.655517 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.655529 1118408 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1005 21:01:36.663147 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 21:01:36.663177 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 21:01:36.663244 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.658502 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1005 21:01:36.665601 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1005 21:01:36.668371 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1005 21:01:36.668388 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1005 21:01:36.668451 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.693609 1118408 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-223209"
	I1005 21:01:36.693651 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.694083 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.655633 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1005 21:01:36.698472 1118408 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 21:01:36.698493 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1005 21:01:36.698580 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.713203 1118408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 21:01:36.655624 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1005 21:01:36.720413 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1005 21:01:36.720498 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.738927 1118408 addons.go:231] Setting addon default-storageclass=true in "addons-223209"
	I1005 21:01:36.738967 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.739457 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.750722 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1005 21:01:36.744731 1118408 out.go:177]   - Using image docker.io/registry:2.8.1
	I1005 21:01:36.744772 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.745936 1118408 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-223209" context rescaled to 1 replicas
	I1005 21:01:36.755210 1118408 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 21:01:36.757912 1118408 out.go:177] * Verifying Kubernetes components...
	I1005 21:01:36.763415 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.765378 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:01:36.767615 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:01:36.765564 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:01:36.773659 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:01:36.771686 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1005 21:01:36.771928 1118408 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 21:01:36.776907 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1005 21:01:36.776978 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.777228 1118408 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:01:36.777245 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 21:01:36.777293 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.798098 1118408 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1005 21:01:36.798120 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1005 21:01:36.798193 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.833389 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.874458 1118408 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1005 21:01:36.877390 1118408 out.go:177]   - Using image docker.io/busybox:stable
	I1005 21:01:36.882973 1118408 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 21:01:36.882994 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1005 21:01:36.883072 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.881850 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.920172 1118408 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 21:01:36.920200 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 21:01:36.920270 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.934856 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.981030 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.008765 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.029710 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.030795 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.038002 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.053603 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.337613 1118408 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1005 21:01:37.337686 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1005 21:01:37.508067 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1005 21:01:37.568335 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 21:01:37.595932 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 21:01:37.596005 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1005 21:01:37.598971 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1005 21:01:37.599036 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1005 21:01:37.616986 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 21:01:37.682349 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1005 21:01:37.682374 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1005 21:01:37.683595 1118408 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1005 21:01:37.683652 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1005 21:01:37.685716 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 21:01:37.806148 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:01:37.822670 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 21:01:37.822742 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 21:01:37.858693 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1005 21:01:37.858853 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1005 21:01:37.871263 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 21:01:37.908368 1118408 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1005 21:01:37.908438 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1005 21:01:37.947232 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1005 21:01:37.947309 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1005 21:01:37.980680 1118408 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1005 21:01:37.980750 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1005 21:01:38.098365 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1005 21:01:38.098393 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1005 21:01:38.144679 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1005 21:01:38.144707 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1005 21:01:38.187017 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1005 21:01:38.187042 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1005 21:01:38.221602 1118408 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1005 21:01:38.221624 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1005 21:01:38.236159 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 21:01:38.236180 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 21:01:38.317937 1118408 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:01:38.318021 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1005 21:01:38.349310 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1005 21:01:38.349378 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1005 21:01:38.355301 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1005 21:01:38.355378 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1005 21:01:38.429199 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1005 21:01:38.456430 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 21:01:38.506280 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1005 21:01:38.506357 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1005 21:01:38.548052 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:01:38.612907 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1005 21:01:38.612934 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1005 21:01:38.670985 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1005 21:01:38.671185 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1005 21:01:38.809676 1118408 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.037722259s)
	I1005 21:01:38.809913 1118408 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.096683144s)
	I1005 21:01:38.809955 1118408 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1005 21:01:38.810873 1118408 node_ready.go:35] waiting up to 6m0s for node "addons-223209" to be "Ready" ...
	I1005 21:01:38.814615 1118408 node_ready.go:49] node "addons-223209" has status "Ready":"True"
	I1005 21:01:38.814683 1118408 node_ready.go:38] duration metric: took 3.745886ms waiting for node "addons-223209" to be "Ready" ...
	I1005 21:01:38.814709 1118408 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:01:38.823906 1118408 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace to be "Ready" ...
	I1005 21:01:38.913863 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1005 21:01:38.913890 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1005 21:01:38.923751 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 21:01:38.923819 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1005 21:01:39.048657 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 21:01:39.129356 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1005 21:01:39.129386 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1005 21:01:39.312699 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1005 21:01:39.312732 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1005 21:01:39.535789 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1005 21:01:39.535814 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1005 21:01:39.712824 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 21:01:39.712897 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1005 21:01:39.923734 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 21:01:40.055378 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.547224323s)
	I1005 21:01:40.844978 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:41.328705 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.642928418s)
	I1005 21:01:41.328804 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.711597416s)
	I1005 21:01:41.328896 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.760487327s)
	W1005 21:01:41.346675 1118408 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1005 21:01:41.417412 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.611183244s)
	I1005 21:01:42.848310 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:43.336903 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.465561281s)
	I1005 21:01:43.337008 1118408 addons.go:467] Verifying addon ingress=true in "addons-223209"
	I1005 21:01:43.339331 1118408 out.go:177] * Verifying ingress addon...
	I1005 21:01:43.337214 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.907942357s)
	I1005 21:01:43.337275 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.880771223s)
	I1005 21:01:43.337354 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.789229164s)
	I1005 21:01:43.337412 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.288680993s)
	I1005 21:01:43.342821 1118408 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1005 21:01:43.339456 1118408 addons.go:467] Verifying addon registry=true in "addons-223209"
	I1005 21:01:43.339470 1118408 addons.go:467] Verifying addon metrics-server=true in "addons-223209"
	W1005 21:01:43.339500 1118408 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 21:01:43.345261 1118408 out.go:177] * Verifying registry addon...
	I1005 21:01:43.347949 1118408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1005 21:01:43.345427 1118408 retry.go:31] will retry after 161.926384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 21:01:43.349132 1118408 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1005 21:01:43.349147 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:43.356943 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:43.358294 1118408 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 21:01:43.358352 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:43.370646 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:43.479642 1118408 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1005 21:01:43.479718 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:43.510098 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:01:43.524479 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:43.867952 1118408 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1005 21:01:43.875254 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:43.889488 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:44.132753 1118408 addons.go:231] Setting addon gcp-auth=true in "addons-223209"
	I1005 21:01:44.132809 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:44.133332 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:44.164985 1118408 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1005 21:01:44.165063 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:44.214398 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:44.386777 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:44.397814 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:44.865081 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:44.886088 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:45.384678 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:45.398846 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:45.400439 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:45.485213 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.561372523s)
	I1005 21:01:45.485264 1118408 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-223209"
	I1005 21:01:45.487857 1118408 out.go:177] * Verifying csi-hostpath-driver addon...
	I1005 21:01:45.490930 1118408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1005 21:01:45.502098 1118408 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 21:01:45.502175 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:45.510223 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:45.596461 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.086310618s)
	I1005 21:01:45.596538 1118408 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.431530888s)
	I1005 21:01:45.599708 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:01:45.601672 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1005 21:01:45.603886 1118408 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1005 21:01:45.603915 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1005 21:01:45.633653 1118408 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1005 21:01:45.633684 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1005 21:01:45.667101 1118408 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 21:01:45.667127 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1005 21:01:45.698857 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 21:01:45.862823 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:45.876574 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:46.018262 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:46.361511 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:46.378050 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:46.523758 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:46.601894 1118408 addons.go:467] Verifying addon gcp-auth=true in "addons-223209"
	I1005 21:01:46.604596 1118408 out.go:177] * Verifying gcp-auth addon...
	I1005 21:01:46.607809 1118408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1005 21:01:46.611215 1118408 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1005 21:01:46.611284 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:46.614058 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:46.861807 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:46.875839 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:47.016940 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:47.118867 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:47.362721 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:47.375196 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:47.516436 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:47.617852 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:47.843209 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:47.862007 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:47.875732 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:48.016978 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:48.118036 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:48.362143 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:48.376394 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:48.516673 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:48.618678 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:48.862067 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:48.876185 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:49.016508 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:49.126863 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:49.362678 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:49.376674 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:49.516582 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:49.617775 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:49.843887 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:49.862633 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:49.876311 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:50.018365 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:50.118860 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:50.363529 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:50.376799 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:50.517163 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:50.618035 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:50.862040 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:50.876066 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:51.017991 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:51.118151 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:51.369365 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:51.380458 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:51.516942 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:51.618971 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:51.844360 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:51.863080 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:51.875962 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:52.016896 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:52.118294 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:52.361324 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:52.376085 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:52.517191 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:52.621066 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:52.865046 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:52.876029 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:53.017585 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:53.118639 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:53.362441 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:53.376705 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:53.516963 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:53.617654 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:53.844711 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:53.865650 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:53.875729 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:54.017555 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:54.118743 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:54.362287 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:54.375772 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:54.516467 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:54.618397 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:54.861689 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:54.876251 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:55.017440 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:55.118553 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:55.361726 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:55.375682 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:55.516524 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:55.618504 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:55.844883 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:55.861202 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:55.877463 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:56.016840 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:56.118352 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:56.361770 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:56.375222 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:56.516591 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:56.618808 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:56.862977 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:56.876095 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:57.017599 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:57.118831 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:57.362394 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:57.375784 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:57.515914 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:57.618677 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:57.863216 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:57.881856 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:58.017169 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:58.117517 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:58.343180 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:58.371362 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:58.375434 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:58.515532 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:58.617838 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:58.861628 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:58.875138 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:59.015965 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:59.118179 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:59.361940 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:59.375717 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:59.516570 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:59.617807 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:59.861184 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:59.875777 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:00.017140 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:00.120021 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:00.361259 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:00.375714 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:00.516056 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:00.617864 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:00.843286 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:00.861261 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:00.875672 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:01.016945 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:01.118409 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:01.361039 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:01.375536 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:01.516951 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:01.618287 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:01.861723 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:01.875361 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:02.016443 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:02.118072 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:02.362325 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:02.376049 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:02.516613 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:02.618002 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:02.861939 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:02.878982 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:03.023104 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:03.120603 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:03.343711 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:03.362046 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:03.375675 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:03.516162 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:03.618460 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:03.862364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:03.876489 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:04.016812 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:04.118276 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:04.362447 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:04.376247 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:04.516154 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:04.618654 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:04.861882 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:04.877238 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:05.016574 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:05.120849 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:05.362533 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:05.376711 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:05.516968 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:05.618252 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:05.843987 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:05.861541 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:05.876533 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:06.016783 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:06.118192 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:06.361469 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:06.376105 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:06.516355 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:06.618659 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:06.861276 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:06.875644 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:07.016266 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:07.118036 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:07.362331 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:07.375989 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:07.516745 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:07.617983 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:07.862392 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:07.878707 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:08.016585 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:08.117883 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:08.343355 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:08.361845 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:08.375470 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:08.516378 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:08.617913 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:08.861503 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:08.876436 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:09.016874 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:09.118426 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:09.361363 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:09.376105 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:09.516158 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:09.618020 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:09.862983 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:09.875998 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:10.017642 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:10.118977 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:10.344181 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:10.361451 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:10.376024 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:10.515645 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:10.618089 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:10.861753 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:10.875609 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:11.015956 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:11.118620 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:11.361600 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:11.375310 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:11.515932 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:11.618600 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:11.862306 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:11.875932 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:12.021264 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:12.119411 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:12.362191 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:12.375888 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:12.526034 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:12.617653 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:12.843343 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:12.861757 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:12.879250 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:13.016091 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:13.118084 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:13.342801 1118408 pod_ready.go:92] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.342827 1118408 pod_ready.go:81] duration metric: took 34.518843235s waiting for pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.342839 1118408 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.348214 1118408 pod_ready.go:92] pod "etcd-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.348247 1118408 pod_ready.go:81] duration metric: took 5.399006ms waiting for pod "etcd-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.348262 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.354694 1118408 pod_ready.go:92] pod "kube-apiserver-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.354720 1118408 pod_ready.go:81] duration metric: took 6.448625ms waiting for pod "kube-apiserver-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.354732 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.361131 1118408 pod_ready.go:92] pod "kube-controller-manager-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.361160 1118408 pod_ready.go:81] duration metric: took 6.417372ms waiting for pod "kube-controller-manager-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.361173 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gksxp" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.363343 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:13.368479 1118408 pod_ready.go:92] pod "kube-proxy-gksxp" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.368504 1118408 pod_ready.go:81] duration metric: took 7.323459ms waiting for pod "kube-proxy-gksxp" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.368515 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.375863 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:13.516556 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:13.618096 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:13.740725 1118408 pod_ready.go:92] pod "kube-scheduler-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.740750 1118408 pod_ready.go:81] duration metric: took 372.227395ms waiting for pod "kube-scheduler-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.740761 1118408 pod_ready.go:38] duration metric: took 34.926028728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:02:13.740776 1118408 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:02:13.740838 1118408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:02:13.758113 1118408 api_server.go:72] duration metric: took 37.002858259s to wait for apiserver process to appear ...
	I1005 21:02:13.758138 1118408 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:02:13.758156 1118408 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 21:02:13.767317 1118408 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 21:02:13.768702 1118408 api_server.go:141] control plane version: v1.28.2
	I1005 21:02:13.768727 1118408 api_server.go:131] duration metric: took 10.58248ms to wait for apiserver health ...
	I1005 21:02:13.768736 1118408 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:02:13.862954 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:13.876318 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:13.947411 1118408 system_pods.go:59] 17 kube-system pods found
	I1005 21:02:13.947501 1118408 system_pods.go:61] "coredns-5dd5756b68-gltv9" [ded08413-2f7f-4fe4-8721-39eeaa369647] Running
	I1005 21:02:13.947527 1118408 system_pods.go:61] "csi-hostpath-attacher-0" [da87d806-c98a-436e-bd49-6aab2c6f317f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1005 21:02:13.947571 1118408 system_pods.go:61] "csi-hostpath-resizer-0" [ae0bb96c-13d5-4693-98db-a7f1d70ac2e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 21:02:13.947600 1118408 system_pods.go:61] "csi-hostpathplugin-pb6dg" [7823a263-b219-48db-9627-d2acfd754511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 21:02:13.947640 1118408 system_pods.go:61] "etcd-addons-223209" [edb40954-f17f-44a4-ad0c-c9048adcc8e5] Running
	I1005 21:02:13.947664 1118408 system_pods.go:61] "kindnet-t76t7" [052f693f-6a4f-4a65-ac52-0954ba7c723f] Running
	I1005 21:02:13.947685 1118408 system_pods.go:61] "kube-apiserver-addons-223209" [65ef5976-9ddf-47a5-8133-fabf3a8f8bbb] Running
	I1005 21:02:13.947722 1118408 system_pods.go:61] "kube-controller-manager-addons-223209" [b7fb6331-91d5-491d-ad96-798d169e4cda] Running
	I1005 21:02:13.947748 1118408 system_pods.go:61] "kube-ingress-dns-minikube" [2ce60d3d-8d03-433c-ab3c-d8d49e618785] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 21:02:13.947767 1118408 system_pods.go:61] "kube-proxy-gksxp" [401a617c-61e8-4dca-9fe7-1967c4c7bea9] Running
	I1005 21:02:13.947802 1118408 system_pods.go:61] "kube-scheduler-addons-223209" [a5506554-6773-48ca-99c3-4905e3b1f18b] Running
	I1005 21:02:13.947826 1118408 system_pods.go:61] "metrics-server-7c66d45ddc-sfsm4" [e1e3a8e3-0927-46e4-b6db-53c5e662952e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 21:02:13.947848 1118408 system_pods.go:61] "registry-8687b" [295664eb-0493-448c-865b-3496e891de88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1005 21:02:13.947886 1118408 system_pods.go:61] "registry-proxy-gw7w5" [6fd70213-4315-40d6-b46c-96d44c97c78a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 21:02:13.947915 1118408 system_pods.go:61] "snapshot-controller-58dbcc7b99-4zqqz" [8876d792-1c61-4945-a509-e8406bb689b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:13.947938 1118408 system_pods.go:61] "snapshot-controller-58dbcc7b99-ln2f7" [abeec72e-8987-44c8-a351-0b5eabfdb781] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:13.947972 1118408 system_pods.go:61] "storage-provisioner" [5329506e-b2cb-42d6-9999-04091a5ddda2] Running
	I1005 21:02:13.947997 1118408 system_pods.go:74] duration metric: took 179.255093ms to wait for pod list to return data ...
	I1005 21:02:13.948018 1118408 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:02:14.016400 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:14.117685 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:14.140182 1118408 default_sa.go:45] found service account: "default"
	I1005 21:02:14.140244 1118408 default_sa.go:55] duration metric: took 192.193237ms for default service account to be created ...
	I1005 21:02:14.140282 1118408 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:02:14.347541 1118408 system_pods.go:86] 17 kube-system pods found
	I1005 21:02:14.347613 1118408 system_pods.go:89] "coredns-5dd5756b68-gltv9" [ded08413-2f7f-4fe4-8721-39eeaa369647] Running
	I1005 21:02:14.347639 1118408 system_pods.go:89] "csi-hostpath-attacher-0" [da87d806-c98a-436e-bd49-6aab2c6f317f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1005 21:02:14.347664 1118408 system_pods.go:89] "csi-hostpath-resizer-0" [ae0bb96c-13d5-4693-98db-a7f1d70ac2e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 21:02:14.347704 1118408 system_pods.go:89] "csi-hostpathplugin-pb6dg" [7823a263-b219-48db-9627-d2acfd754511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 21:02:14.347724 1118408 system_pods.go:89] "etcd-addons-223209" [edb40954-f17f-44a4-ad0c-c9048adcc8e5] Running
	I1005 21:02:14.347746 1118408 system_pods.go:89] "kindnet-t76t7" [052f693f-6a4f-4a65-ac52-0954ba7c723f] Running
	I1005 21:02:14.347776 1118408 system_pods.go:89] "kube-apiserver-addons-223209" [65ef5976-9ddf-47a5-8133-fabf3a8f8bbb] Running
	I1005 21:02:14.347798 1118408 system_pods.go:89] "kube-controller-manager-addons-223209" [b7fb6331-91d5-491d-ad96-798d169e4cda] Running
	I1005 21:02:14.347820 1118408 system_pods.go:89] "kube-ingress-dns-minikube" [2ce60d3d-8d03-433c-ab3c-d8d49e618785] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 21:02:14.347840 1118408 system_pods.go:89] "kube-proxy-gksxp" [401a617c-61e8-4dca-9fe7-1967c4c7bea9] Running
	I1005 21:02:14.347860 1118408 system_pods.go:89] "kube-scheduler-addons-223209" [a5506554-6773-48ca-99c3-4905e3b1f18b] Running
	I1005 21:02:14.347895 1118408 system_pods.go:89] "metrics-server-7c66d45ddc-sfsm4" [e1e3a8e3-0927-46e4-b6db-53c5e662952e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 21:02:14.347917 1118408 system_pods.go:89] "registry-8687b" [295664eb-0493-448c-865b-3496e891de88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1005 21:02:14.347938 1118408 system_pods.go:89] "registry-proxy-gw7w5" [6fd70213-4315-40d6-b46c-96d44c97c78a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 21:02:14.347974 1118408 system_pods.go:89] "snapshot-controller-58dbcc7b99-4zqqz" [8876d792-1c61-4945-a509-e8406bb689b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:14.348000 1118408 system_pods.go:89] "snapshot-controller-58dbcc7b99-ln2f7" [abeec72e-8987-44c8-a351-0b5eabfdb781] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:14.348019 1118408 system_pods.go:89] "storage-provisioner" [5329506e-b2cb-42d6-9999-04091a5ddda2] Running
	I1005 21:02:14.348044 1118408 system_pods.go:126] duration metric: took 207.735159ms to wait for k8s-apps to be running ...
	I1005 21:02:14.348074 1118408 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:02:14.348185 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:02:14.361910 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:14.368367 1118408 system_svc.go:56] duration metric: took 20.283725ms WaitForService to wait for kubelet.
	I1005 21:02:14.368442 1118408 kubeadm.go:581] duration metric: took 37.613193211s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:02:14.368476 1118408 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:02:14.376477 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:14.517633 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:14.540601 1118408 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:02:14.540687 1118408 node_conditions.go:123] node cpu capacity is 2
	I1005 21:02:14.540714 1118408 node_conditions.go:105] duration metric: took 172.204781ms to run NodePressure ...
	I1005 21:02:14.540740 1118408 start.go:228] waiting for startup goroutines ...
	I1005 21:02:14.618343 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:14.862431 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:14.877156 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:15.021829 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:15.119295 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:15.366553 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:15.379668 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:15.516757 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:15.618715 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:15.861781 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:15.876148 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:16.016398 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:16.118542 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:16.362615 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:16.376204 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:16.516652 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:16.618681 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:16.862744 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:16.875397 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:17.017458 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:17.118403 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:17.364333 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:17.377193 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:17.516026 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:17.618123 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:17.861791 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:17.876238 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:18.016542 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:18.118567 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:18.364098 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:18.380246 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:18.516185 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:18.617953 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:18.861623 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:18.876142 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:19.016458 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:19.118264 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:19.377874 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:19.379169 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:19.518068 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:19.617942 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:19.861761 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:19.875834 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:20.017539 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:20.118612 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:20.361931 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:20.375574 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:20.521082 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:20.617579 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:20.862023 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:20.875773 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:21.017784 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:21.118044 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:21.361725 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:21.376296 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:21.516295 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:21.620100 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:21.864234 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:21.875912 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:22.016679 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:22.120068 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:22.363304 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:22.375833 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:22.516730 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:22.618479 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:22.864844 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:22.877177 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:23.016016 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:23.117635 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:23.362291 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:23.375752 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:23.516391 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:23.617999 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:23.861937 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:23.875498 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:24.016165 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:24.118472 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:24.361747 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:24.375689 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:24.516645 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:24.618759 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:24.861774 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:24.875295 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:25.017081 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:25.118461 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:25.361961 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:25.375378 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:25.516687 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:25.618662 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:25.862089 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:25.875689 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:26.016332 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:26.118787 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:26.362183 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:26.377136 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:26.515905 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:26.618806 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:26.862072 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:26.876004 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:27.015946 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:27.117982 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:27.367736 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:27.381301 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:27.516149 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:27.617918 1118408 kapi.go:107] duration metric: took 41.010108165s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1005 21:02:27.621087 1118408 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-223209 cluster.
	I1005 21:02:27.623442 1118408 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1005 21:02:27.625508 1118408 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1005 21:02:27.861839 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:27.875351 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:28.015924 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:28.370451 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:28.376200 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:28.516141 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:28.861440 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:28.876249 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:29.016482 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:29.362496 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:29.379185 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:29.515688 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:29.866820 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:29.876015 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:30.030958 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:30.361546 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:30.376507 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:30.516906 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:30.862239 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:30.882986 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:31.016348 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:31.362588 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:31.376617 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:31.516826 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:31.861460 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:31.876292 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:32.016568 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:32.366196 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:32.378614 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:32.516551 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:32.863778 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:32.876144 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:33.016599 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:33.365486 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:33.375241 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:33.515910 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:33.861507 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:33.876388 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:34.016513 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:34.362618 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:34.376626 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:34.516921 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:34.861640 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:34.875016 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:35.019284 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:35.361888 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:35.375635 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:35.516084 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:35.861772 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:35.875582 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:36.017383 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:36.363749 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:36.376439 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:36.516029 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:36.861357 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:36.878888 1118408 kapi.go:107] duration metric: took 53.530936503s to wait for kubernetes.io/minikube-addons=registry ...
	I1005 21:02:37.017714 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:37.361346 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:37.515967 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:37.862819 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:38.021497 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:38.362280 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:38.516382 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:38.861359 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:39.016285 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:39.361820 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:39.517301 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:39.865330 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:40.017482 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:40.362232 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:40.516669 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:40.862432 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:41.018431 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:41.363209 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:41.516901 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:41.862041 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:42.022133 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:42.362340 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:42.515875 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:42.861633 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:43.017192 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:43.362061 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:43.517058 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:43.861426 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:44.017305 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:44.366174 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:44.516662 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:44.861566 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:45.020121 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:45.362548 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:45.517209 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:45.862364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:46.016948 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:46.362777 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:46.517327 1118408 kapi.go:107] duration metric: took 1m1.026383851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1005 21:02:46.862211 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:47.361620 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:47.861393 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:48.362210 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:48.861804 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:49.361391 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:49.863289 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:50.365832 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:50.861485 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:51.362364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:51.862394 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:52.361782 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:52.864667 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:53.361795 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:53.864043 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:54.362364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:54.862770 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:55.365448 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:55.862398 1118408 kapi.go:107] duration metric: took 1m12.519575187s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1005 21:02:55.864705 1118408 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, volumesnapshots, gcp-auth, registry, csi-hostpath-driver, ingress
	I1005 21:02:55.866741 1118408 addons.go:502] enable addons completed in 1m19.504053325s: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner inspektor-gadget metrics-server volumesnapshots gcp-auth registry csi-hostpath-driver ingress]
	I1005 21:02:55.866792 1118408 start.go:233] waiting for cluster config update ...
	I1005 21:02:55.866813 1118408 start.go:242] writing updated cluster config ...
	I1005 21:02:55.867155 1118408 ssh_runner.go:195] Run: rm -f paused
	I1005 21:02:55.931264 1118408 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:02:55.933845 1118408 out.go:177] * Done! kubectl is now configured to use "addons-223209" cluster and "default" namespace by default
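The repeated `kapi.go:96` lines above come from a fixed-interval poll: minikube rechecks the label selector until the pods leave Pending (or a timeout expires), then the `kapi.go:107` "duration metric" line reports the total wait. A minimal sketch of that wait-loop pattern (hypothetical helper, not minikube's actual code):

```python
import time

def wait_for_state(check, interval=0.5, timeout=60.0):
    """Poll check() until it returns truthy or the timeout expires.

    Returns the elapsed wait, mirroring the 'duration metric: took ...'
    log line; raises TimeoutError if the deadline passes first.
    """
    start = time.monotonic()
    while True:
        if check():
            return time.monotonic() - start
        if time.monotonic() - start > timeout:
            raise TimeoutError("pod never left Pending")
        time.sleep(interval)

# Simulated readiness check: the pod becomes Running on the 4th poll.
states = iter(["Pending", "Pending", "Pending", "Running"])
took = wait_for_state(lambda: next(states) == "Running", interval=0.01)
```

The real loop additionally re-lists pods through the API server on each tick, which is why each iteration logs the current state.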
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	37abe690a997c       97e050c3e21e9       9 seconds ago        Exited              hello-world-app           2                   fc3640e034001       hello-world-app-5d77478584-6slp8
	e7b6c3310facc       645adbf280ba8       20 seconds ago       Exited              cloud-spanner-emulator    4                   4b4a19b4fbc6b       cloud-spanner-emulator-7d49f968d9-vdzvm
	e0afd0fac454f       df8fd1ca35d66       33 seconds ago       Running             nginx                     0                   5596ae829c974       nginx
	7c9a071a46ebc       dfcd119260332       59 seconds ago       Running             headlamp                  0                   fe19f6eeb4a39       headlamp-58b88cff49-4xgkh
	48669877288ad       0fa733f52482a       About a minute ago   Exited              controller                0                   ecd0d3fc28fcc       ingress-nginx-controller-5c4c674fdc-mvqqv
	ba40c0d1610b6       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   98d942ed7d056       gcp-auth-d4c87556c-52sbn
	2f25382f9b0e0       8f2588812ab29       About a minute ago   Exited              patch                     0                   226f84619c3c3       ingress-nginx-admission-patch-jzg5h
	ef98c594d70a5       8f2588812ab29       About a minute ago   Exited              create                    0                   7173d9405f75b       ingress-nginx-admission-create-hlwd9
	33aec96dcfdad       7ce2150c8929b       About a minute ago   Running             local-path-provisioner    0                   f55bf0dfe7977       local-path-provisioner-78b46b4d5c-m2r65
	b99ce8af0b748       97e04611ad434       2 minutes ago        Running             coredns                   0                   445b65dbeedb1       coredns-5dd5756b68-gltv9
	9718bda95560a       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   8e161276df44c       storage-provisioner
	19d1a92e02d00       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni               0                   bdd9f6134c133       kindnet-t76t7
	2a7610d2c6238       7da62c127fc0f       2 minutes ago        Running             kube-proxy                0                   c1af03799f925       kube-proxy-gksxp
	074e3cb411fba       9cdd6470f48c8       3 minutes ago        Running             etcd                      0                   aff4f25b98799       etcd-addons-223209
	695e20ff6ea91       30bb499447fe1       3 minutes ago        Running             kube-apiserver            0                   c39ed38a3aed0       kube-apiserver-addons-223209
	0e7e370fa42eb       64fc40cee3716       3 minutes ago        Running             kube-scheduler            0                   9cceea69816f2       kube-scheduler-addons-223209
	dca30c30326b6       89d57b83c1786       3 minutes ago        Running             kube-controller-manager   0                   dbf3dc8b11352       kube-controller-manager-addons-223209
	
	* 
	* ==> containerd <==
	* Oct 05 21:04:09 addons-223209 containerd[746]: time="2023-10-05T21:04:09.155949921Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:04:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11004 runtime=io.containerd.runc.v2\n"
	Oct 05 21:04:09 addons-223209 containerd[746]: time="2023-10-05T21:04:09.172582620Z" level=info msg="TearDown network for sandbox \"29ee4499c6d38735b6843049b522f85dc05e622213b7a361dac125f41d208191\" successfully"
	Oct 05 21:04:09 addons-223209 containerd[746]: time="2023-10-05T21:04:09.172639219Z" level=info msg="StopPodSandbox for \"29ee4499c6d38735b6843049b522f85dc05e622213b7a361dac125f41d208191\" returns successfully"
	Oct 05 21:04:09 addons-223209 containerd[746]: time="2023-10-05T21:04:09.200293459Z" level=info msg="TearDown network for sandbox \"1608bfd04c40eb204d99447851a1e5fb58f60feae74c96dab95f65e0001b3f3d\" successfully"
	Oct 05 21:04:09 addons-223209 containerd[746]: time="2023-10-05T21:04:09.200348606Z" level=info msg="StopPodSandbox for \"1608bfd04c40eb204d99447851a1e5fb58f60feae74c96dab95f65e0001b3f3d\" returns successfully"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.047370189Z" level=info msg="RemoveContainer for \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\""
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.058483423Z" level=info msg="RemoveContainer for \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\" returns successfully"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.059434639Z" level=error msg="ContainerStatus for \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\": not found"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.064292305Z" level=info msg="RemoveContainer for \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\""
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.070888761Z" level=info msg="RemoveContainer for \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\" returns successfully"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.071696905Z" level=error msg="ContainerStatus for \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\": not found"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.854430965Z" level=info msg="Kill container \"48669877288ad0a6ec10ddac151311edb54ba2c341af67a1391b0059ee26de49\""
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.948021574Z" level=info msg="shim disconnected" id=48669877288ad0a6ec10ddac151311edb54ba2c341af67a1391b0059ee26de49
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.948093344Z" level=warning msg="cleaning up after shim disconnected" id=48669877288ad0a6ec10ddac151311edb54ba2c341af67a1391b0059ee26de49 namespace=k8s.io
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.948104626Z" level=info msg="cleaning up dead shim"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.958667200Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:04:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11087 runtime=io.containerd.runc.v2\n"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.962213054Z" level=info msg="StopContainer for \"48669877288ad0a6ec10ddac151311edb54ba2c341af67a1391b0059ee26de49\" returns successfully"
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.962862388Z" level=info msg="StopPodSandbox for \"ecd0d3fc28fcc7bb60d5f68f46153aa577ea9900458297256161df26c677a65e\""
	Oct 05 21:04:10 addons-223209 containerd[746]: time="2023-10-05T21:04:10.962934060Z" level=info msg="Container to stop \"48669877288ad0a6ec10ddac151311edb54ba2c341af67a1391b0059ee26de49\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 05 21:04:11 addons-223209 containerd[746]: time="2023-10-05T21:04:11.010162162Z" level=info msg="shim disconnected" id=ecd0d3fc28fcc7bb60d5f68f46153aa577ea9900458297256161df26c677a65e
	Oct 05 21:04:11 addons-223209 containerd[746]: time="2023-10-05T21:04:11.010235712Z" level=warning msg="cleaning up after shim disconnected" id=ecd0d3fc28fcc7bb60d5f68f46153aa577ea9900458297256161df26c677a65e namespace=k8s.io
	Oct 05 21:04:11 addons-223209 containerd[746]: time="2023-10-05T21:04:11.010247634Z" level=info msg="cleaning up dead shim"
	Oct 05 21:04:11 addons-223209 containerd[746]: time="2023-10-05T21:04:11.021022770Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:04:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11118 runtime=io.containerd.runc.v2\n"
	Oct 05 21:04:11 addons-223209 containerd[746]: time="2023-10-05T21:04:11.072036019Z" level=info msg="TearDown network for sandbox \"ecd0d3fc28fcc7bb60d5f68f46153aa577ea9900458297256161df26c677a65e\" successfully"
	Oct 05 21:04:11 addons-223209 containerd[746]: time="2023-10-05T21:04:11.072103858Z" level=info msg="StopPodSandbox for \"ecd0d3fc28fcc7bb60d5f68f46153aa577ea9900458297256161df26c677a65e\" returns successfully"
	
	* 
	* ==> coredns [b99ce8af0b748fd898662415f7aa7e13d1ec51059aba0bbc2ad32a42117ba79b] <==
	* [INFO] 10.244.0.18:45757 - 11702 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061046s
	[INFO] 10.244.0.18:45757 - 10091 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062285s
	[INFO] 10.244.0.18:48438 - 5020 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00268624s
	[INFO] 10.244.0.18:45757 - 48622 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001636693s
	[INFO] 10.244.0.18:48438 - 18773 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000134531s
	[INFO] 10.244.0.18:45757 - 29870 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002212838s
	[INFO] 10.244.0.18:45757 - 30990 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098674s
	[INFO] 10.244.0.18:39548 - 31275 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000138888s
	[INFO] 10.244.0.18:46810 - 35771 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037136s
	[INFO] 10.244.0.18:46810 - 52256 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090158s
	[INFO] 10.244.0.18:39548 - 58767 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000079901s
	[INFO] 10.244.0.18:46810 - 56354 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063729s
	[INFO] 10.244.0.18:39548 - 9052 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044381s
	[INFO] 10.244.0.18:46810 - 28100 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038031s
	[INFO] 10.244.0.18:39548 - 38476 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041911s
	[INFO] 10.244.0.18:46810 - 34069 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035273s
	[INFO] 10.244.0.18:39548 - 32284 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031598s
	[INFO] 10.244.0.18:46810 - 65068 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040919s
	[INFO] 10.244.0.18:39548 - 57088 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049452s
	[INFO] 10.244.0.18:46810 - 54794 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001549121s
	[INFO] 10.244.0.18:39548 - 57558 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001082242s
	[INFO] 10.244.0.18:46810 - 20056 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001147416s
	[INFO] 10.244.0.18:39548 - 30454 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000850621s
	[INFO] 10.244.0.18:39548 - 27445 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000111327s
	[INFO] 10.244.0.18:46810 - 28897 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000095376s
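The NXDOMAIN cascade above is the resolver's search-path expansion at work: with the pod default `ndots:5`, `hello-world-app.default.svc.cluster.local` (only 4 dots) is first tried with every suffix in the search list — the pod namespace's entry, `svc.cluster.local`, `cluster.local`, the node's cloud domain — before the bare name is queried and answered NOERROR. A sketch of that candidate ordering (search list assumed from the log; illustration only, not CoreDNS code):

```python
def candidate_queries(name, search_domains, ndots=5):
    """Order DNS queries the way a resolver with an ndots threshold does.

    A name with fewer than `ndots` dots is tried against each search
    domain first; the name itself is always tried as well (last).
    """
    candidates = []
    if name.count(".") < ndots:
        candidates += [f"{name}.{d}" for d in search_domains]
    candidates.append(name)
    return candidates

search = [
    "ingress-nginx.svc.cluster.local",  # querying pod's namespace entry (assumed)
    "svc.cluster.local",
    "cluster.local",
    "us-east-2.compute.internal",       # node's cloud search domain
]
qs = candidate_queries("hello-world-app.default.svc.cluster.local", search)
```

Each suffixed candidate fails with NXDOMAIN in the log, and only the final absolute query returns NOERROR — expected behavior, not a fault.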
	
	* 
	* ==> describe nodes <==
	* Name:               addons-223209
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-223209
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=addons-223209
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_01_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-223209
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:01:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-223209
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:03:57 +0000   Thu, 05 Oct 2023 21:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:03:57 +0000   Thu, 05 Oct 2023 21:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:03:57 +0000   Thu, 05 Oct 2023 21:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:03:57 +0000   Thu, 05 Oct 2023 21:01:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-223209
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 d05e31b9cee54500949e5b5b6300f221
	  System UUID:                4060a633-0e01-4f8d-a752-012b1f3e17a0
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-7d49f968d9-vdzvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  default                     hello-world-app-5d77478584-6slp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-52sbn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  headlamp                    headlamp-58b88cff49-4xgkh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-5dd5756b68-gltv9                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m40s
	  kube-system                 etcd-addons-223209                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m52s
	  kube-system                 kindnet-t76t7                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m40s
	  kube-system                 kube-apiserver-addons-223209               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-controller-manager-addons-223209      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-proxy-gksxp                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-scheduler-addons-223209               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  local-path-storage          local-path-provisioner-78b46b4d5c-m2r65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (2%)    220Mi (2%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
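The percentages in the resource tables are each request or limit divided by the node's allocatable amount, truncated to an integer as the output suggests. With 2 allocatable CPUs (2000m), the 850m of summed requests is 42% and kube-apiserver's 250m request alone is 12%. A quick check (values taken from the tables above):

```python
def pct(millicores, allocatable_millicores):
    """Integer percentage of node allocatable (truncating division)."""
    return millicores * 100 // allocatable_millicores

allocatable_cpu = 2000  # 2 CPUs, from the Allocatable section
assert pct(850, allocatable_cpu) == 42   # summed CPU requests
assert pct(250, allocatable_cpu) == 12   # kube-apiserver request
assert pct(100, allocatable_cpu) == 5    # coredns / etcd / kindnet requests
```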
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m38s                kube-proxy       
	  Normal  Starting                 3m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m1s (x8 over 3m1s)  kubelet          Node addons-223209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x8 over 3m1s)  kubelet          Node addons-223209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x7 over 3m1s)  kubelet          Node addons-223209 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m53s                kubelet          Node addons-223209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s                kubelet          Node addons-223209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s                kubelet          Node addons-223209 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m53s                kubelet          Node addons-223209 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m52s                kubelet          Node addons-223209 status is now: NodeReady
	  Normal  RegisteredNode           2m41s                node-controller  Node addons-223209 event: Registered Node addons-223209 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001082] FS-Cache: O-key=[8] '0f613b0000000000'
	[  +0.000687] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000c2b6221a
	[  +0.001022] FS-Cache: N-key=[8] '0f613b0000000000'
	[  +0.002852] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=0000003b [p=00000039 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000946bf243
	[  +0.001102] FS-Cache: O-key=[8] '0f613b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000007e5c81de
	[  +0.001243] FS-Cache: N-key=[8] '0f613b0000000000'
	[  +2.263905] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000943] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000c8b3c65a
	[  +0.001019] FS-Cache: O-key=[8] '0e613b0000000000'
	[  +0.000696] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ef961079
	[  +0.001093] FS-Cache: N-key=[8] '0e613b0000000000'
	[  +0.429619] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001097] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=000000000d4a2f1f
	[  +0.001076] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000c2b6221a
	[  +0.001153] FS-Cache: N-key=[8] '14613b0000000000'
	
	* 
	* ==> etcd [074e3cb411fbae49fc6f05f0294dd7d0dde8aaee6560dea8418e5fabada34035] <==
	* {"level":"info","ts":"2023-10-05T21:01:16.339337Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:01:16.339782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-05T21:01:16.344474Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-05T21:01:16.344875Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-05T21:01:16.355374Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:01:16.356324Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:01:16.355906Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-05T21:01:16.710677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-05T21:01:16.710883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-05T21:01:16.710993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-05T21:01:16.711093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.711176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.711273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.711361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.715193Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.717368Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-223209 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:01:16.717522Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:01:16.718449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.720579Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.720749Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.719254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-05T21:01:16.730583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:01:16.731861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:01:16.732379Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:01:16.732579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [ba40c0d1610b60d45a5d26be4aab2af80d19a2158540f1a80e949ab6abe3ca61] <==
	* 2023/10/05 21:02:26 GCP Auth Webhook started!
	2023/10/05 21:02:56 Ready to marshal response ...
	2023/10/05 21:02:56 Ready to write response ...
	2023/10/05 21:02:57 Ready to marshal response ...
	2023/10/05 21:02:57 Ready to write response ...
	2023/10/05 21:03:04 Ready to marshal response ...
	2023/10/05 21:03:04 Ready to write response ...
	2023/10/05 21:03:06 Ready to marshal response ...
	2023/10/05 21:03:06 Ready to write response ...
	2023/10/05 21:03:12 Ready to marshal response ...
	2023/10/05 21:03:12 Ready to write response ...
	2023/10/05 21:03:12 Ready to marshal response ...
	2023/10/05 21:03:12 Ready to write response ...
	2023/10/05 21:03:12 Ready to marshal response ...
	2023/10/05 21:03:12 Ready to write response ...
	2023/10/05 21:03:25 Ready to marshal response ...
	2023/10/05 21:03:25 Ready to write response ...
	2023/10/05 21:03:40 Ready to marshal response ...
	2023/10/05 21:03:40 Ready to write response ...
	2023/10/05 21:03:50 Ready to marshal response ...
	2023/10/05 21:03:50 Ready to write response ...
	2023/10/05 21:03:50 Ready to marshal response ...
	2023/10/05 21:03:50 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:04:16 up  6:46,  0 users,  load average: 1.04, 2.05, 2.75
	Linux addons-223209 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [19d1a92e02d001e34ff2efd87c919c25cb07284a518f6a3f353c4ac05f21495b] <==
	* I1005 21:02:07.883297       1 main.go:227] handling current node
	I1005 21:02:17.898927       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:17.898955       1 main.go:227] handling current node
	I1005 21:02:27.910864       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:27.910890       1 main.go:227] handling current node
	I1005 21:02:37.922480       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:37.922507       1 main.go:227] handling current node
	I1005 21:02:47.934081       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:47.934109       1 main.go:227] handling current node
	I1005 21:02:57.946453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:57.946484       1 main.go:227] handling current node
	I1005 21:03:07.951270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:07.951297       1 main.go:227] handling current node
	I1005 21:03:17.964565       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:17.964789       1 main.go:227] handling current node
	I1005 21:03:27.977653       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:27.977680       1 main.go:227] handling current node
	I1005 21:03:37.990434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:37.990465       1 main.go:227] handling current node
	I1005 21:03:47.994393       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:47.994422       1 main.go:227] handling current node
	I1005 21:03:58.010603       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:58.010637       1 main.go:227] handling current node
	I1005 21:04:08.015132       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:04:08.015163       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [695e20ff6ea91a4042a7bc2aa3dc2d17272a444da85849c259b420a17f912233] <==
	* W1005 21:03:29.375493       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1005 21:03:36.920444       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1005 21:03:37.844317       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400c3dc6f0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4006ef94f0), ResponseWriter:(*httpsnoop.rw)(0x4006ef94f0), Flusher:(*httpsnoop.rw)(0x4006ef94f0), CloseNotifier:(*httpsnoop.rw)(0x4006ef94f0), Pusher:(*httpsnoop.rw)(0x4006ef94f0)}}, encoder:(*versioning.codec)(0x4009868640), memAllocator:(*runtime.Allocator)(0x40085f2d98)})
	I1005 21:03:40.229705       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1005 21:03:40.662969       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.70.208"}
	I1005 21:03:50.505808       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.69.75"}
	I1005 21:04:08.793760       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.793803       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.804130       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.804186       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.824752       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.824809       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.847134       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.847197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.851980       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.852024       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.859098       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.859162       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.882606       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.882662       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:04:08.892151       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:04:08.892191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1005 21:04:09.843481       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1005 21:04:09.892606       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1005 21:04:09.912128       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [dca30c30326b6c7c6dbe5bd35f3ffbfae4fb6fab8afb039c65ee2839527683d1] <==
	* I1005 21:04:02.214504       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I1005 21:04:02.306043       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W1005 21:04:04.678869       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:04.678901       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1005 21:04:07.034363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.791µs"
	I1005 21:04:07.751909       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1005 21:04:07.781642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="6.704µs"
	I1005 21:04:07.786124       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1005 21:04:07.804292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="43.807µs"
	I1005 21:04:08.938301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="9.132µs"
	E1005 21:04:09.845340       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:09.894713       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:09.913868       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:04:11.058071       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:11.058103       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:04:11.204980       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:11.205014       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:04:11.362885       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:11.362920       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:04:13.202782       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:13.202816       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:04:13.804187       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:13.804223       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:04:13.883174       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:04:13.883208       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [2a7610d2c6238462ee37cbd4526d197a8a86a235fba199c56917eccfb223ba73] <==
	* I1005 21:01:37.522916       1 server_others.go:69] "Using iptables proxy"
	I1005 21:01:37.543860       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1005 21:01:37.635621       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:01:37.637849       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:01:37.637881       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:01:37.637889       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:01:37.637949       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:01:37.638183       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:01:37.638194       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:01:37.641190       1 config.go:188] "Starting service config controller"
	I1005 21:01:37.641254       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:01:37.641315       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:01:37.641321       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:01:37.641931       1 config.go:315] "Starting node config controller"
	I1005 21:01:37.641939       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:01:37.743502       1 shared_informer.go:318] Caches are synced for node config
	I1005 21:01:37.743530       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:01:37.743557       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0e7e370fa42eb35f418c414fe0709d9c0be0850bdfdc67f90e38ca54153ee27f] <==
	* I1005 21:01:19.976978       1 serving.go:348] Generated self-signed cert in-memory
	W1005 21:01:21.611710       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1005 21:01:21.611933       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:01:21.612026       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1005 21:01:21.612098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 21:01:21.631734       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1005 21:01:21.632014       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:01:21.634164       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1005 21:01:21.634454       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1005 21:01:21.634571       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:01:21.634679       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1005 21:01:21.648397       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:01:21.657427       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1005 21:01:23.135119       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.302311    1344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsnn9\" (UniqueName: \"kubernetes.io/projected/8876d792-1c61-4945-a509-e8406bb689b7-kube-api-access-rsnn9\") pod \"8876d792-1c61-4945-a509-e8406bb689b7\" (UID: \"8876d792-1c61-4945-a509-e8406bb689b7\") "
	Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.304972    1344 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8876d792-1c61-4945-a509-e8406bb689b7-kube-api-access-rsnn9" (OuterVolumeSpecName: "kube-api-access-rsnn9") pod "8876d792-1c61-4945-a509-e8406bb689b7" (UID: "8876d792-1c61-4945-a509-e8406bb689b7"). InnerVolumeSpecName "kube-api-access-rsnn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.305405    1344 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abeec72e-8987-44c8-a351-0b5eabfdb781-kube-api-access-ttlbk" (OuterVolumeSpecName: "kube-api-access-ttlbk") pod "abeec72e-8987-44c8-a351-0b5eabfdb781" (UID: "abeec72e-8987-44c8-a351-0b5eabfdb781"). InnerVolumeSpecName "kube-api-access-ttlbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.402692    1344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ttlbk\" (UniqueName: \"kubernetes.io/projected/abeec72e-8987-44c8-a351-0b5eabfdb781-kube-api-access-ttlbk\") on node \"addons-223209\" DevicePath \"\""
	Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.402911    1344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rsnn9\" (UniqueName: \"kubernetes.io/projected/8876d792-1c61-4945-a509-e8406bb689b7-kube-api-access-rsnn9\") on node \"addons-223209\" DevicePath \"\""
	Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.748892    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0746fb32-d408-4470-a2f8-c08af607cf93" path="/var/lib/kubelet/pods/0746fb32-d408-4470-a2f8-c08af607cf93/volumes"
	Oct 05 21:04:09 addons-223209 kubelet[1344]: I1005 21:04:09.749280    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="731c09aa-5a22-4288-a2fc-42870164a93e" path="/var/lib/kubelet/pods/731c09aa-5a22-4288-a2fc-42870164a93e/volumes"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: I1005 21:04:10.042849    1344 scope.go:117] "RemoveContainer" containerID="47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: I1005 21:04:10.058941    1344 scope.go:117] "RemoveContainer" containerID="47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: E1005 21:04:10.059748    1344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\": not found" containerID="47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: I1005 21:04:10.059922    1344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139"} err="failed to get container status \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\": rpc error: code = NotFound desc = an error occurred when try to find container \"47ccf121fa752a90d6975bf811220ec7e60f368b50d23dd4a30ade9640511139\": not found"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: I1005 21:04:10.059948    1344 scope.go:117] "RemoveContainer" containerID="5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: I1005 21:04:10.071235    1344 scope.go:117] "RemoveContainer" containerID="5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: E1005 21:04:10.072273    1344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\": not found" containerID="5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c"
	Oct 05 21:04:10 addons-223209 kubelet[1344]: I1005 21:04:10.072322    1344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c"} err="failed to get container status \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5184c7daed3c8d52fbe557735fc8f6a30b17a5b48b27393f1bdfbf32a9421f6c\": not found"
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.052060    1344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecd0d3fc28fcc7bb60d5f68f46153aa577ea9900458297256161df26c677a65e"
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.214030    1344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dcfeb525-388e-438f-83a6-36e8fae747fa-webhook-cert\") pod \"dcfeb525-388e-438f-83a6-36e8fae747fa\" (UID: \"dcfeb525-388e-438f-83a6-36e8fae747fa\") "
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.214086    1344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tslc8\" (UniqueName: \"kubernetes.io/projected/dcfeb525-388e-438f-83a6-36e8fae747fa-kube-api-access-tslc8\") pod \"dcfeb525-388e-438f-83a6-36e8fae747fa\" (UID: \"dcfeb525-388e-438f-83a6-36e8fae747fa\") "
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.217045    1344 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcfeb525-388e-438f-83a6-36e8fae747fa-kube-api-access-tslc8" (OuterVolumeSpecName: "kube-api-access-tslc8") pod "dcfeb525-388e-438f-83a6-36e8fae747fa" (UID: "dcfeb525-388e-438f-83a6-36e8fae747fa"). InnerVolumeSpecName "kube-api-access-tslc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.220547    1344 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcfeb525-388e-438f-83a6-36e8fae747fa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dcfeb525-388e-438f-83a6-36e8fae747fa" (UID: "dcfeb525-388e-438f-83a6-36e8fae747fa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.314967    1344 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dcfeb525-388e-438f-83a6-36e8fae747fa-webhook-cert\") on node \"addons-223209\" DevicePath \"\""
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.315012    1344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tslc8\" (UniqueName: \"kubernetes.io/projected/dcfeb525-388e-438f-83a6-36e8fae747fa-kube-api-access-tslc8\") on node \"addons-223209\" DevicePath \"\""
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.749125    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8876d792-1c61-4945-a509-e8406bb689b7" path="/var/lib/kubelet/pods/8876d792-1c61-4945-a509-e8406bb689b7/volumes"
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.749498    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="abeec72e-8987-44c8-a351-0b5eabfdb781" path="/var/lib/kubelet/pods/abeec72e-8987-44c8-a351-0b5eabfdb781/volumes"
	Oct 05 21:04:11 addons-223209 kubelet[1344]: I1005 21:04:11.749926    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dcfeb525-388e-438f-83a6-36e8fae747fa" path="/var/lib/kubelet/pods/dcfeb525-388e-438f-83a6-36e8fae747fa/volumes"
	
	* 
	* ==> storage-provisioner [9718bda95560a4870592207607f9cd87fe6edee77674558bcefb13eb58071cc5] <==
	* I1005 21:01:42.822421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 21:01:42.857212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 21:01:42.860987       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 21:01:42.890859       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 21:01:42.891324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ebf97a6-726e-4529-8c22-73db6f04d521", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-223209_e43f7678-8b88-4326-964e-300d0baacbd1 became leader
	I1005 21:01:42.891357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-223209_e43f7678-8b88-4326-964e-300d0baacbd1!
	I1005 21:01:42.991709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-223209_e43f7678-8b88-4326-964e-300d0baacbd1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-223209 -n addons-223209
helpers_test.go:261: (dbg) Run:  kubectl --context addons-223209 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.15s)

                                                
                                    
TestAddons/parallel/CloudSpanner (9.97s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-vdzvm" [292e37e0-8ebd-4168-bfd6-2aa4f4962e85] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012709205s
addons_test.go:858: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-223209
addons_test.go:858: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-223209: exit status 11 (682.114084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-05T21:03:10Z" level=error msg="stat /run/containerd/runc/k8s.io/fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:859: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 addons disable cloud-spanner -p addons-223209" : exit status 11
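Per the stderr above, the MK_ADDON_DISABLE_PAUSED error comes from minikube's pre-disable check, which shells out to `sudo runc --root /run/containerd/runc/k8s.io list -f json` to look for paused containers; here that command failed because runc's state directory had already been removed. As a hedged sketch of what consuming that JSON list looks like (the sample data and helper name below are illustrative, not minikube's actual code):

```python
import json

# Made-up sample shaped like `runc list -f json` output
# (objects with id/pid/status fields).
SAMPLE_RUNC_LIST = '''[
  {"id": "abc123", "pid": 4242, "status": "running"},
  {"id": "def456", "pid": 4343, "status": "paused"}
]'''

def paused_container_ids(raw: str) -> list[str]:
    """Return the ids of containers whose runtime status is "paused"."""
    containers = json.loads(raw) or []
    return [c["id"] for c in containers if c.get("status") == "paused"]

print(paused_container_ids(SAMPLE_RUNC_LIST))  # -> ['def456']
```

In this run the check never got as far as finding a paused container; the `stat .../fcde0b2f...` error in stderr indicates the listed container's state file was gone, so the whole `list` invocation exited non-zero.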
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-223209
helpers_test.go:235: (dbg) docker inspect addons-223209:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e",
	        "Created": "2023-10-05T21:01:00.758903804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1118869,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:01:01.095576047Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/hosts",
	        "LogPath": "/var/lib/docker/containers/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e/ed307c47b5761e656fb4cc84c529ed4def102fe612de45fc60e938afd7917f8e-json.log",
	        "Name": "/addons-223209",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-223209:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-223209",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d-init/diff:/var/lib/docker/overlay2/0ac9dde3ffb5508a08f1d2d343ad7198828af6fb1810d9bf7c6479a8d59aaca8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4be1bb810218b003196125b5319a890d852f49b4aeb4488c0023f40e064e020d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-223209",
	                "Source": "/var/lib/docker/volumes/addons-223209/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-223209",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-223209",
	                "name.minikube.sigs.k8s.io": "addons-223209",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa1abcf8677c05b539804f8c24e8a6d9339b9d34885799dc4fc917d69d48f6da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34008"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34007"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34004"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34006"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34005"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aa1abcf8677c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-223209": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ed307c47b576",
	                        "addons-223209"
	                    ],
	                    "NetworkID": "e57d17fa4807df22d27e586abf820741faf5db521f740672ffc05b138f35425a",
	                    "EndpointID": "e898ac43b4f5f0ba7ca2c85824713d2544cc2616522a7c05a2cfa7433382fab8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-223209 -n addons-223209
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-223209 logs -n 25: (2.615740175s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | -p download-only-610377                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | -p download-only-610377                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| delete  | -p download-only-610377                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| delete  | -p download-only-610377                                                                     | download-only-610377   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| start   | --download-only -p                                                                          | download-docker-853390 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | download-docker-853390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-853390                                                                   | download-docker-853390 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-750535   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | binary-mirror-750535                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36693                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-750535                                                                     | binary-mirror-750535   | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:00 UTC |
	| addons  | disable dashboard -p                                                                        | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |                     |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-223209 --wait=true                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC | 05 Oct 23 21:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-223209 ssh cat                                                                       | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | /opt/local-path-provisioner/pvc-f6a4555f-aa36-48f9-875a-61866ab03538_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-223209 addons disable                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-223209 ip                                                                            | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	| addons  | addons-223209 addons disable                                                                | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC | 05 Oct 23 21:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC |                     |
	|         | addons-223209                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-223209          | jenkins | v1.31.2 | 05 Oct 23 21:03 UTC |                     |
	|         | -p addons-223209                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:00:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:00:37.008164 1118408 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:00:37.008539 1118408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:37.008574 1118408 out.go:309] Setting ErrFile to fd 2...
	I1005 21:00:37.008595 1118408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:37.008898 1118408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:00:37.009498 1118408 out.go:303] Setting JSON to false
	I1005 21:00:37.010701 1118408 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24183,"bootTime":1696515454,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:00:37.010854 1118408 start.go:138] virtualization:  
	I1005 21:00:37.014434 1118408 out.go:177] * [addons-223209] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:00:37.017590 1118408 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:00:37.017764 1118408 notify.go:220] Checking for updates...
	I1005 21:00:37.019995 1118408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:00:37.022564 1118408 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:00:37.024841 1118408 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:00:37.027548 1118408 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:00:37.029708 1118408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:00:37.032217 1118408 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:00:37.060471 1118408 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:00:37.060574 1118408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:37.140521 1118408 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:00:37.129978497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:37.140622 1118408 docker.go:294] overlay module found
	I1005 21:00:37.143010 1118408 out.go:177] * Using the docker driver based on user configuration
	I1005 21:00:37.145064 1118408 start.go:298] selected driver: docker
	I1005 21:00:37.145084 1118408 start.go:902] validating driver "docker" against <nil>
	I1005 21:00:37.145100 1118408 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:00:37.145795 1118408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:37.215225 1118408 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:00:37.205554357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:37.215389 1118408 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:00:37.215621 1118408 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 21:00:37.217783 1118408 out.go:177] * Using Docker driver with root privileges
	I1005 21:00:37.219840 1118408 cni.go:84] Creating CNI manager for ""
	I1005 21:00:37.219861 1118408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:00:37.219873 1118408 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:00:37.219894 1118408 start_flags.go:321] config:
	{Name:addons-223209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:00:37.222428 1118408 out.go:177] * Starting control plane node addons-223209 in cluster addons-223209
	I1005 21:00:37.224224 1118408 cache.go:122] Beginning downloading kic base image for docker with containerd
	I1005 21:00:37.225842 1118408 out.go:177] * Pulling base image ...
	I1005 21:00:37.227570 1118408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:00:37.227622 1118408 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4
	I1005 21:00:37.227635 1118408 cache.go:57] Caching tarball of preloaded images
	I1005 21:00:37.227661 1118408 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:00:37.227705 1118408 preload.go:174] Found /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1005 21:00:37.227715 1118408 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on containerd
	I1005 21:00:37.228096 1118408 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/config.json ...
	I1005 21:00:37.228122 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/config.json: {Name:mkf3266e30624d753f83a833e37134b9aadd9fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:00:37.245111 1118408 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:00:37.245193 1118408 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:00:37.245211 1118408 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 21:00:37.245216 1118408 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 21:00:37.245223 1118408 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:00:37.245228 1118408 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from local cache
	I1005 21:00:53.153479 1118408 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from cached tarball
	I1005 21:00:53.153518 1118408 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:00:53.153570 1118408 start.go:365] acquiring machines lock for addons-223209: {Name:mk0a6c99c13897b18be35158ba2129fcb313a3ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:00:53.154135 1118408 start.go:369] acquired machines lock for "addons-223209" in 537.991µs
	I1005 21:00:53.154174 1118408 start.go:93] Provisioning new machine with config: &{Name:addons-223209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 21:00:53.154269 1118408 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:00:53.156712 1118408 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1005 21:00:53.156983 1118408 start.go:159] libmachine.API.Create for "addons-223209" (driver="docker")
	I1005 21:00:53.157020 1118408 client.go:168] LocalClient.Create starting
	I1005 21:00:53.157144 1118408 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem
	I1005 21:00:54.714405 1118408 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem
	I1005 21:00:55.121915 1118408 cli_runner.go:164] Run: docker network inspect addons-223209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:00:55.144839 1118408 cli_runner.go:211] docker network inspect addons-223209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:00:55.144924 1118408 network_create.go:281] running [docker network inspect addons-223209] to gather additional debugging logs...
	I1005 21:00:55.144948 1118408 cli_runner.go:164] Run: docker network inspect addons-223209
	W1005 21:00:55.163003 1118408 cli_runner.go:211] docker network inspect addons-223209 returned with exit code 1
	I1005 21:00:55.163041 1118408 network_create.go:284] error running [docker network inspect addons-223209]: docker network inspect addons-223209: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-223209 not found
	I1005 21:00:55.163102 1118408 network_create.go:286] output of [docker network inspect addons-223209]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-223209 not found
	
	** /stderr **
	I1005 21:00:55.163235 1118408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:00:55.183161 1118408 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000c89300}
	I1005 21:00:55.183202 1118408 network_create.go:124] attempt to create docker network addons-223209 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 21:00:55.183265 1118408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-223209 addons-223209
	I1005 21:00:55.260203 1118408 network_create.go:108] docker network addons-223209 192.168.49.0/24 created
	I1005 21:00:55.260233 1118408 kic.go:117] calculated static IP "192.168.49.2" for the "addons-223209" container
	I1005 21:00:55.260307 1118408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:00:55.277362 1118408 cli_runner.go:164] Run: docker volume create addons-223209 --label name.minikube.sigs.k8s.io=addons-223209 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:00:55.296478 1118408 oci.go:103] Successfully created a docker volume addons-223209
	I1005 21:00:55.296568 1118408 cli_runner.go:164] Run: docker run --rm --name addons-223209-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-223209 --entrypoint /usr/bin/test -v addons-223209:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:00:56.476257 1118408 cli_runner.go:217] Completed: docker run --rm --name addons-223209-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-223209 --entrypoint /usr/bin/test -v addons-223209:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.179632722s)
	I1005 21:00:56.476285 1118408 oci.go:107] Successfully prepared a docker volume addons-223209
	I1005 21:00:56.476310 1118408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:00:56.476329 1118408 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:00:56.476413 1118408 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-223209:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:01:00.674853 1118408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-223209:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.198394173s)
	I1005 21:01:00.674886 1118408 kic.go:199] duration metric: took 4.198553 seconds to extract preloaded images to volume
	W1005 21:01:00.675033 1118408 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:01:00.675185 1118408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:01:00.742475 1118408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-223209 --name addons-223209 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-223209 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-223209 --network addons-223209 --ip 192.168.49.2 --volume addons-223209:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:01:01.105229 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Running}}
	I1005 21:01:01.128836 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:01.157645 1118408 cli_runner.go:164] Run: docker exec addons-223209 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:01:01.246399 1118408 oci.go:144] the created container "addons-223209" has a running status.
	I1005 21:01:01.246425 1118408 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa...
	I1005 21:01:02.294283 1118408 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:01:02.331511 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:02.357146 1118408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:01:02.357171 1118408 kic_runner.go:114] Args: [docker exec --privileged addons-223209 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:01:02.428641 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:02.450897 1118408 machine.go:88] provisioning docker machine ...
	I1005 21:01:02.450935 1118408 ubuntu.go:169] provisioning hostname "addons-223209"
	I1005 21:01:02.451009 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:02.475209 1118408 main.go:141] libmachine: Using SSH client type: native
	I1005 21:01:02.475641 1118408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34008 <nil> <nil>}
	I1005 21:01:02.475658 1118408 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-223209 && echo "addons-223209" | sudo tee /etc/hostname
	I1005 21:01:02.623205 1118408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-223209
	
	I1005 21:01:02.623280 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:02.642355 1118408 main.go:141] libmachine: Using SSH client type: native
	I1005 21:01:02.642771 1118408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34008 <nil> <nil>}
	I1005 21:01:02.642796 1118408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-223209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-223209/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-223209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:01:02.772371 1118408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:01:02.772403 1118408 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1112519/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1112519/.minikube}
	I1005 21:01:02.772425 1118408 ubuntu.go:177] setting up certificates
	I1005 21:01:02.772434 1118408 provision.go:83] configureAuth start
	I1005 21:01:02.772500 1118408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-223209
	I1005 21:01:02.790149 1118408 provision.go:138] copyHostCerts
	I1005 21:01:02.790236 1118408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.pem (1082 bytes)
	I1005 21:01:02.790365 1118408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/cert.pem (1123 bytes)
	I1005 21:01:02.790426 1118408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/key.pem (1675 bytes)
	I1005 21:01:02.790475 1118408 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem org=jenkins.addons-223209 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-223209]
	I1005 21:01:03.152035 1118408 provision.go:172] copyRemoteCerts
	I1005 21:01:03.152133 1118408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:01:03.152184 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.171105 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.269984 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:01:03.298018 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1005 21:01:03.326087 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:01:03.354370 1118408 provision.go:86] duration metric: configureAuth took 581.913688ms
	I1005 21:01:03.354398 1118408 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:01:03.354597 1118408 config.go:182] Loaded profile config "addons-223209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:01:03.354611 1118408 machine.go:91] provisioned docker machine in 903.694075ms
	I1005 21:01:03.354618 1118408 client.go:171] LocalClient.Create took 10.197589343s
	I1005 21:01:03.354638 1118408 start.go:167] duration metric: libmachine.API.Create for "addons-223209" took 10.197658511s
	I1005 21:01:03.354651 1118408 start.go:300] post-start starting for "addons-223209" (driver="docker")
	I1005 21:01:03.354660 1118408 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:01:03.354718 1118408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:01:03.354765 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.372999 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.470210 1118408 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:01:03.474605 1118408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:01:03.474640 1118408 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:01:03.474651 1118408 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:01:03.474658 1118408 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:01:03.474668 1118408 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1112519/.minikube/addons for local assets ...
	I1005 21:01:03.474737 1118408 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1112519/.minikube/files for local assets ...
	I1005 21:01:03.474760 1118408 start.go:303] post-start completed in 120.102758ms
	I1005 21:01:03.475100 1118408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-223209
	I1005 21:01:03.492917 1118408 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/config.json ...
	I1005 21:01:03.493190 1118408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:01:03.493232 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.511922 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.605199 1118408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:01:03.611128 1118408 start.go:128] duration metric: createHost completed in 10.456840901s
	I1005 21:01:03.611149 1118408 start.go:83] releasing machines lock for "addons-223209", held for 10.456996159s
	I1005 21:01:03.611226 1118408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-223209
	I1005 21:01:03.629257 1118408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:01:03.629380 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.629458 1118408 ssh_runner.go:195] Run: cat /version.json
	I1005 21:01:03.629501 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:03.650093 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.665118 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:03.878031 1118408 ssh_runner.go:195] Run: systemctl --version
	I1005 21:01:03.883502 1118408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:01:03.889018 1118408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 21:01:03.920558 1118408 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:01:03.920638 1118408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:01:03.956949 1118408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 21:01:03.956974 1118408 start.go:469] detecting cgroup driver to use...
	I1005 21:01:03.957005 1118408 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:01:03.957065 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1005 21:01:03.971716 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 21:01:03.985426 1118408 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:01:03.985498 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:01:04.002119 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:01:04.021123 1118408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:01:04.120753 1118408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:01:04.223016 1118408 docker.go:213] disabling docker service ...
	I1005 21:01:04.223116 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:01:04.244909 1118408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:01:04.259634 1118408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:01:04.358411 1118408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:01:04.454973 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:01:04.468547 1118408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:01:04.489563 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1005 21:01:04.501550 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 21:01:04.513986 1118408 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 21:01:04.514082 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 21:01:04.526022 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 21:01:04.537672 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 21:01:04.549376 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 21:01:04.561362 1118408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:01:04.572351 1118408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 21:01:04.583860 1118408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:01:04.594270 1118408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:01:04.605470 1118408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:01:04.693504 1118408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 21:01:04.844434 1118408 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I1005 21:01:04.844590 1118408 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1005 21:01:04.849531 1118408 start.go:537] Will wait 60s for crictl version
	I1005 21:01:04.849644 1118408 ssh_runner.go:195] Run: which crictl
	I1005 21:01:04.854315 1118408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:01:04.897907 1118408 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1005 21:01:04.897994 1118408 ssh_runner.go:195] Run: containerd --version
	I1005 21:01:04.927978 1118408 ssh_runner.go:195] Run: containerd --version
	I1005 21:01:04.968047 1118408 out.go:177] * Preparing Kubernetes v1.28.2 on containerd 1.6.24 ...
	I1005 21:01:04.970607 1118408 cli_runner.go:164] Run: docker network inspect addons-223209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:01:04.988202 1118408 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 21:01:04.993024 1118408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:01:05.009006 1118408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:01:05.009094 1118408 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:01:05.059662 1118408 containerd.go:604] all images are preloaded for containerd runtime.
	I1005 21:01:05.059689 1118408 containerd.go:518] Images already preloaded, skipping extraction
	I1005 21:01:05.059750 1118408 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:01:05.102540 1118408 containerd.go:604] all images are preloaded for containerd runtime.
	I1005 21:01:05.102566 1118408 cache_images.go:84] Images are preloaded, skipping loading
	I1005 21:01:05.102633 1118408 ssh_runner.go:195] Run: sudo crictl info
	I1005 21:01:05.146002 1118408 cni.go:84] Creating CNI manager for ""
	I1005 21:01:05.146028 1118408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:01:05.146059 1118408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:01:05.146079 1118408 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-223209 NodeName:addons-223209 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 21:01:05.146214 1118408 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-223209"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 21:01:05.146291 1118408 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-223209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 21:01:05.146367 1118408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 21:01:05.158327 1118408 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:01:05.158416 1118408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:01:05.169757 1118408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1005 21:01:05.191831 1118408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 21:01:05.213599 1118408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1005 21:01:05.234866 1118408 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:01:05.239368 1118408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:01:05.252840 1118408 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209 for IP: 192.168.49.2
	I1005 21:01:05.252870 1118408 certs.go:190] acquiring lock for shared ca certs: {Name:mkf0b25ffbb252c0d3d05e76f2fd0942f3acc421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:05.253006 1118408 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key
	I1005 21:01:05.462536 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt ...
	I1005 21:01:05.462569 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt: {Name:mk59ad5af18c1957a1db1754f40aab717d69629f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:05.463146 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key ...
	I1005 21:01:05.463163 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key: {Name:mk819a95ec4daa166ffab18d1a533d72044e25b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:05.463258 1118408 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key
	I1005 21:01:06.088788 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt ...
	I1005 21:01:06.088824 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt: {Name:mk4a245c6fd8fe7e8d5596a403fd1394a84fb238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.089020 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key ...
	I1005 21:01:06.089034 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key: {Name:mk8bd30f29e81a733aa84014449b7f2a9f5439d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.089661 1118408 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.key
	I1005 21:01:06.089681 1118408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt with IP's: []
	I1005 21:01:06.644963 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt ...
	I1005 21:01:06.644994 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: {Name:mk33908b56a3fba0a4f5f6165ee76f8f0b8c55f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.645628 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.key ...
	I1005 21:01:06.645644 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.key: {Name:mk6363ec5fc3f67b793f468d057224efc1831281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.645732 1118408 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2
	I1005 21:01:06.645750 1118408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 21:01:06.938798 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2 ...
	I1005 21:01:06.938830 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2: {Name:mke1fcf1c35a86f9eb294b89b16cb7efa5018505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.939012 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2 ...
	I1005 21:01:06.939025 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2: {Name:mk751ae2204d770bc6bdddd2ae20ed01a62e0e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:06.939611 1118408 certs.go:337] copying /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt
	I1005 21:01:06.939693 1118408 certs.go:341] copying /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key
	I1005 21:01:06.939744 1118408 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key
	I1005 21:01:06.939763 1118408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt with IP's: []
	I1005 21:01:07.473102 1118408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt ...
	I1005 21:01:07.473136 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt: {Name:mk34a90bb6929e42abf5c72755987f5b87e923e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:07.473851 1118408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key ...
	I1005 21:01:07.473875 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key: {Name:mk87ae58d2de8029c6830915bb71bc16bb867266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:07.474498 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:01:07.474907 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:01:07.474984 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:01:07.475026 1118408 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem (1675 bytes)
	I1005 21:01:07.476014 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:01:07.507745 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 21:01:07.537700 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:01:07.566810 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1005 21:01:07.595809 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:01:07.624205 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 21:01:07.652008 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:01:07.679606 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:01:07.707586 1118408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:01:07.735876 1118408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:01:07.756222 1118408 ssh_runner.go:195] Run: openssl version
	I1005 21:01:07.763418 1118408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:01:07.774840 1118408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:01:07.779579 1118408 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:01:07.779665 1118408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:01:07.788001 1118408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 21:01:07.799592 1118408 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:01:07.803956 1118408 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:01:07.804005 1118408 kubeadm.go:404] StartCluster: {Name:addons-223209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-223209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:01:07.804084 1118408 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1005 21:01:07.804141 1118408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:01:07.847520 1118408 cri.go:89] found id: ""
	I1005 21:01:07.847621 1118408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:01:07.858170 1118408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:01:07.868889 1118408 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 21:01:07.868982 1118408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:01:07.879730 1118408 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:01:07.879774 1118408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 21:01:07.931385 1118408 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 21:01:07.931605 1118408 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 21:01:07.977114 1118408 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:01:07.977246 1118408 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:01:07.977306 1118408 kubeadm.go:322] OS: Linux
	I1005 21:01:07.977377 1118408 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 21:01:07.977457 1118408 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 21:01:07.977535 1118408 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 21:01:07.977610 1118408 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 21:01:07.977685 1118408 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 21:01:07.977763 1118408 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 21:01:07.977836 1118408 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1005 21:01:07.977910 1118408 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1005 21:01:07.977982 1118408 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1005 21:01:08.065799 1118408 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:01:08.065965 1118408 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:01:08.066094 1118408 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 21:01:08.324870 1118408 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:01:08.329444 1118408 out.go:204]   - Generating certificates and keys ...
	I1005 21:01:08.329628 1118408 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 21:01:08.329694 1118408 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 21:01:08.640101 1118408 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:01:08.977292 1118408 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:01:09.629905 1118408 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 21:01:09.836211 1118408 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 21:01:10.188692 1118408 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 21:01:10.189084 1118408 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-223209 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:01:10.630334 1118408 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 21:01:10.630711 1118408 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-223209 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:01:10.934508 1118408 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:01:11.207192 1118408 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:01:11.683150 1118408 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 21:01:11.683450 1118408 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:01:12.196784 1118408 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:01:12.767271 1118408 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:01:13.468096 1118408 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:01:14.142981 1118408 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:01:14.143852 1118408 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:01:14.146665 1118408 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:01:14.149492 1118408 out.go:204]   - Booting up control plane ...
	I1005 21:01:14.149633 1118408 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:01:14.149707 1118408 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:01:14.150177 1118408 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:01:14.164883 1118408 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:01:14.165692 1118408 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:01:14.166024 1118408 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 21:01:14.271767 1118408 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:01:22.279875 1118408 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.007244 seconds
	I1005 21:01:22.280489 1118408 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:01:22.297535 1118408 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:01:22.826977 1118408 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:01:22.827186 1118408 kubeadm.go:322] [mark-control-plane] Marking the node addons-223209 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 21:01:23.340662 1118408 kubeadm.go:322] [bootstrap-token] Using token: 0g8b6d.se6cruugf1au57p3
	I1005 21:01:23.343075 1118408 out.go:204]   - Configuring RBAC rules ...
	I1005 21:01:23.343195 1118408 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:01:23.349269 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:01:23.361040 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:01:23.365127 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:01:23.371850 1118408 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:01:23.377366 1118408 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:01:23.393413 1118408 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:01:23.649105 1118408 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 21:01:23.762350 1118408 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 21:01:23.765083 1118408 kubeadm.go:322] 
	I1005 21:01:23.765153 1118408 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 21:01:23.765160 1118408 kubeadm.go:322] 
	I1005 21:01:23.765232 1118408 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 21:01:23.765237 1118408 kubeadm.go:322] 
	I1005 21:01:23.765261 1118408 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 21:01:23.765316 1118408 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:01:23.765364 1118408 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:01:23.765369 1118408 kubeadm.go:322] 
	I1005 21:01:23.765419 1118408 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 21:01:23.765424 1118408 kubeadm.go:322] 
	I1005 21:01:23.765468 1118408 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 21:01:23.765473 1118408 kubeadm.go:322] 
	I1005 21:01:23.765522 1118408 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 21:01:23.765591 1118408 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:01:23.765655 1118408 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:01:23.765660 1118408 kubeadm.go:322] 
	I1005 21:01:23.766026 1118408 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:01:23.766170 1118408 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 21:01:23.766193 1118408 kubeadm.go:322] 
	I1005 21:01:23.766327 1118408 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0g8b6d.se6cruugf1au57p3 \
	I1005 21:01:23.766578 1118408 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:571092fde05632971def08ad2a457b2fd089790ef449e849065ad5827b1ed47e \
	I1005 21:01:23.766606 1118408 kubeadm.go:322] 	--control-plane 
	I1005 21:01:23.766616 1118408 kubeadm.go:322] 
	I1005 21:01:23.766703 1118408 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:01:23.766707 1118408 kubeadm.go:322] 
	I1005 21:01:23.766791 1118408 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0g8b6d.se6cruugf1au57p3 \
	I1005 21:01:23.766895 1118408 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:571092fde05632971def08ad2a457b2fd089790ef449e849065ad5827b1ed47e 
	I1005 21:01:23.769659 1118408 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:01:23.769769 1118408 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:01:23.769783 1118408 cni.go:84] Creating CNI manager for ""
	I1005 21:01:23.769790 1118408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:01:23.772274 1118408 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:01:23.774187 1118408 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:01:23.780468 1118408 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 21:01:23.780485 1118408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:01:23.809024 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:01:24.765978 1118408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:01:24.766113 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:24.766205 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=addons-223209 minikube.k8s.io/updated_at=2023_10_05T21_01_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:24.914428 1118408 ops.go:34] apiserver oom_adj: -16
	I1005 21:01:24.914514 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:25.061931 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:25.689978 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:26.189907 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:26.690408 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:27.190473 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:27.690489 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:28.189613 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:28.689644 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:29.189944 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:29.689530 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:30.190418 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:30.689622 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:31.190210 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:31.689964 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:32.189603 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:32.690592 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:33.190011 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:33.689801 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:34.190399 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:34.690136 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:35.189618 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:35.690027 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:36.190494 1118408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:01:36.359181 1118408 kubeadm.go:1081] duration metric: took 11.593113966s to wait for elevateKubeSystemPrivileges.
	I1005 21:01:36.359206 1118408 kubeadm.go:406] StartCluster complete in 28.555205013s
	I1005 21:01:36.359223 1118408 settings.go:142] acquiring lock: {Name:mk8ac06a875c8ddea9ee6a3c248c409c1d3f301d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:36.359714 1118408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:01:36.360108 1118408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/kubeconfig: {Name:mk4151b883e566a83b3cbe0bf9e01957efa61f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:01:36.362433 1118408 config.go:182] Loaded profile config "addons-223209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:01:36.362478 1118408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:01:36.362682 1118408 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1005 21:01:36.362823 1118408 addons.go:69] Setting volumesnapshots=true in profile "addons-223209"
	I1005 21:01:36.362838 1118408 addons.go:231] Setting addon volumesnapshots=true in "addons-223209"
	I1005 21:01:36.362875 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.363362 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.364104 1118408 addons.go:69] Setting ingress-dns=true in profile "addons-223209"
	I1005 21:01:36.364172 1118408 addons.go:231] Setting addon ingress-dns=true in "addons-223209"
	I1005 21:01:36.364255 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.364726 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.365083 1118408 addons.go:69] Setting inspektor-gadget=true in profile "addons-223209"
	I1005 21:01:36.365102 1118408 addons.go:231] Setting addon inspektor-gadget=true in "addons-223209"
	I1005 21:01:36.365133 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.365569 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.365731 1118408 addons.go:69] Setting cloud-spanner=true in profile "addons-223209"
	I1005 21:01:36.365761 1118408 addons.go:231] Setting addon cloud-spanner=true in "addons-223209"
	I1005 21:01:36.365803 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.366197 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.366505 1118408 addons.go:69] Setting metrics-server=true in profile "addons-223209"
	I1005 21:01:36.366524 1118408 addons.go:231] Setting addon metrics-server=true in "addons-223209"
	I1005 21:01:36.366554 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.366937 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.370738 1118408 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-223209"
	I1005 21:01:36.370803 1118408 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-223209"
	I1005 21:01:36.370844 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.371355 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.371872 1118408 addons.go:69] Setting registry=true in profile "addons-223209"
	I1005 21:01:36.371891 1118408 addons.go:231] Setting addon registry=true in "addons-223209"
	I1005 21:01:36.371923 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.372315 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.387219 1118408 addons.go:69] Setting default-storageclass=true in profile "addons-223209"
	I1005 21:01:36.387252 1118408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-223209"
	I1005 21:01:36.387566 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.391232 1118408 addons.go:69] Setting storage-provisioner=true in profile "addons-223209"
	I1005 21:01:36.391265 1118408 addons.go:231] Setting addon storage-provisioner=true in "addons-223209"
	I1005 21:01:36.391311 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.391756 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.407148 1118408 addons.go:69] Setting gcp-auth=true in profile "addons-223209"
	I1005 21:01:36.407188 1118408 mustload.go:65] Loading cluster: addons-223209
	I1005 21:01:36.407401 1118408 config.go:182] Loaded profile config "addons-223209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:01:36.407649 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.407789 1118408 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-223209"
	I1005 21:01:36.407803 1118408 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-223209"
	I1005 21:01:36.408038 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.426208 1118408 addons.go:69] Setting ingress=true in profile "addons-223209"
	I1005 21:01:36.426242 1118408 addons.go:231] Setting addon ingress=true in "addons-223209"
	I1005 21:01:36.426297 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.426745 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.577490 1118408 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1005 21:01:36.591855 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1005 21:01:36.591873 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1005 21:01:36.591931 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.612377 1118408 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1005 21:01:36.619825 1118408 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1005 21:01:36.619902 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1005 21:01:36.620131 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.629790 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1005 21:01:36.641210 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1005 21:01:36.643612 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1005 21:01:36.647175 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1005 21:01:36.650031 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1005 21:01:36.652805 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1005 21:01:36.655021 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1005 21:01:36.655517 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.655529 1118408 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1005 21:01:36.663147 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 21:01:36.663177 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 21:01:36.663244 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.658502 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1005 21:01:36.665601 1118408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1005 21:01:36.668371 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1005 21:01:36.668388 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1005 21:01:36.668451 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.693609 1118408 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-223209"
	I1005 21:01:36.693651 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.694083 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.655633 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1005 21:01:36.698472 1118408 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 21:01:36.698493 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1005 21:01:36.698580 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.713203 1118408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 21:01:36.655624 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1005 21:01:36.720413 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1005 21:01:36.720498 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.738927 1118408 addons.go:231] Setting addon default-storageclass=true in "addons-223209"
	I1005 21:01:36.738967 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:36.739457 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:36.750722 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1005 21:01:36.744731 1118408 out.go:177]   - Using image docker.io/registry:2.8.1
	I1005 21:01:36.744772 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.745936 1118408 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-223209" context rescaled to 1 replicas
	I1005 21:01:36.755210 1118408 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 21:01:36.757912 1118408 out.go:177] * Verifying Kubernetes components...
	I1005 21:01:36.763415 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.765378 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:01:36.767615 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:01:36.765564 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:01:36.773659 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:01:36.771686 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1005 21:01:36.771928 1118408 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 21:01:36.776907 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1005 21:01:36.776978 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.777228 1118408 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:01:36.777245 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 21:01:36.777293 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.798098 1118408 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1005 21:01:36.798120 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1005 21:01:36.798193 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.833389 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.874458 1118408 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1005 21:01:36.877390 1118408 out.go:177]   - Using image docker.io/busybox:stable
	I1005 21:01:36.882973 1118408 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 21:01:36.882994 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1005 21:01:36.883072 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.881850 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.920172 1118408 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 21:01:36.920200 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 21:01:36.920270 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:36.934856 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:36.981030 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.008765 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.029710 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.030795 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.038002 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.053603 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:37.337613 1118408 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1005 21:01:37.337686 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1005 21:01:37.508067 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1005 21:01:37.568335 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 21:01:37.595932 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 21:01:37.596005 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1005 21:01:37.598971 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1005 21:01:37.599036 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1005 21:01:37.616986 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 21:01:37.682349 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1005 21:01:37.682374 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1005 21:01:37.683595 1118408 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1005 21:01:37.683652 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1005 21:01:37.685716 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 21:01:37.806148 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:01:37.822670 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 21:01:37.822742 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 21:01:37.858693 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1005 21:01:37.858853 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1005 21:01:37.871263 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 21:01:37.908368 1118408 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1005 21:01:37.908438 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1005 21:01:37.947232 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1005 21:01:37.947309 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1005 21:01:37.980680 1118408 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1005 21:01:37.980750 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1005 21:01:38.098365 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1005 21:01:38.098393 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1005 21:01:38.144679 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1005 21:01:38.144707 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1005 21:01:38.187017 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1005 21:01:38.187042 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1005 21:01:38.221602 1118408 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1005 21:01:38.221624 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1005 21:01:38.236159 1118408 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 21:01:38.236180 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 21:01:38.317937 1118408 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:01:38.318021 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1005 21:01:38.349310 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1005 21:01:38.349378 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1005 21:01:38.355301 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1005 21:01:38.355378 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1005 21:01:38.429199 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1005 21:01:38.456430 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 21:01:38.506280 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1005 21:01:38.506357 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1005 21:01:38.548052 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:01:38.612907 1118408 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1005 21:01:38.612934 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1005 21:01:38.670985 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1005 21:01:38.671185 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1005 21:01:38.809676 1118408 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.037722259s)
	I1005 21:01:38.809913 1118408 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.096683144s)
	I1005 21:01:38.809955 1118408 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1005 21:01:38.810873 1118408 node_ready.go:35] waiting up to 6m0s for node "addons-223209" to be "Ready" ...
	I1005 21:01:38.814615 1118408 node_ready.go:49] node "addons-223209" has status "Ready":"True"
	I1005 21:01:38.814683 1118408 node_ready.go:38] duration metric: took 3.745886ms waiting for node "addons-223209" to be "Ready" ...
	I1005 21:01:38.814709 1118408 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:01:38.823906 1118408 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace to be "Ready" ...
	I1005 21:01:38.913863 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1005 21:01:38.913890 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1005 21:01:38.923751 1118408 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 21:01:38.923819 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1005 21:01:39.048657 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 21:01:39.129356 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1005 21:01:39.129386 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1005 21:01:39.312699 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1005 21:01:39.312732 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1005 21:01:39.535789 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1005 21:01:39.535814 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1005 21:01:39.712824 1118408 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 21:01:39.712897 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1005 21:01:39.923734 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 21:01:40.055378 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.547224323s)
	I1005 21:01:40.844978 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:41.328705 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.642928418s)
	I1005 21:01:41.328804 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.711597416s)
	I1005 21:01:41.328896 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.760487327s)
	W1005 21:01:41.346675 1118408 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1005 21:01:41.417412 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.611183244s)
	I1005 21:01:42.848310 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:43.336903 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.465561281s)
	I1005 21:01:43.337008 1118408 addons.go:467] Verifying addon ingress=true in "addons-223209"
	I1005 21:01:43.339331 1118408 out.go:177] * Verifying ingress addon...
	I1005 21:01:43.337214 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.907942357s)
	I1005 21:01:43.337275 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.880771223s)
	I1005 21:01:43.337354 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.789229164s)
	I1005 21:01:43.337412 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.288680993s)
	I1005 21:01:43.342821 1118408 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1005 21:01:43.339456 1118408 addons.go:467] Verifying addon registry=true in "addons-223209"
	I1005 21:01:43.339470 1118408 addons.go:467] Verifying addon metrics-server=true in "addons-223209"
	W1005 21:01:43.339500 1118408 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 21:01:43.345261 1118408 out.go:177] * Verifying registry addon...
	I1005 21:01:43.347949 1118408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1005 21:01:43.345427 1118408 retry.go:31] will retry after 161.926384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 21:01:43.349132 1118408 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1005 21:01:43.349147 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:43.356943 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:43.358294 1118408 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 21:01:43.358352 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:43.370646 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:43.479642 1118408 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1005 21:01:43.479718 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:43.510098 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:01:43.524479 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:43.867952 1118408 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1005 21:01:43.875254 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:43.889488 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:44.132753 1118408 addons.go:231] Setting addon gcp-auth=true in "addons-223209"
	I1005 21:01:44.132809 1118408 host.go:66] Checking if "addons-223209" exists ...
	I1005 21:01:44.133332 1118408 cli_runner.go:164] Run: docker container inspect addons-223209 --format={{.State.Status}}
	I1005 21:01:44.164985 1118408 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1005 21:01:44.165063 1118408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-223209
	I1005 21:01:44.214398 1118408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34008 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/addons-223209/id_rsa Username:docker}
	I1005 21:01:44.386777 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:44.397814 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:44.865081 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:44.886088 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:45.384678 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:45.398846 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:45.400439 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:45.485213 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.561372523s)
	I1005 21:01:45.485264 1118408 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-223209"
	I1005 21:01:45.487857 1118408 out.go:177] * Verifying csi-hostpath-driver addon...
	I1005 21:01:45.490930 1118408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1005 21:01:45.502098 1118408 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 21:01:45.502175 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:45.510223 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:45.596461 1118408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.086310618s)
	I1005 21:01:45.596538 1118408 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.431530888s)
	I1005 21:01:45.599708 1118408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:01:45.601672 1118408 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1005 21:01:45.603886 1118408 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1005 21:01:45.603915 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1005 21:01:45.633653 1118408 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1005 21:01:45.633684 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1005 21:01:45.667101 1118408 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 21:01:45.667127 1118408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1005 21:01:45.698857 1118408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 21:01:45.862823 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:45.876574 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:46.018262 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:46.361511 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:46.378050 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:46.523758 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:46.601894 1118408 addons.go:467] Verifying addon gcp-auth=true in "addons-223209"
	I1005 21:01:46.604596 1118408 out.go:177] * Verifying gcp-auth addon...
	I1005 21:01:46.607809 1118408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1005 21:01:46.611215 1118408 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1005 21:01:46.611284 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:46.614058 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:46.861807 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:46.875839 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:47.016940 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:47.118867 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:47.362721 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:47.375196 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:47.516436 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:47.617852 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:47.843209 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:47.862007 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:47.875732 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:48.016978 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:48.118036 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:48.362143 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:48.376394 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:48.516673 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:48.618678 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:48.862067 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:48.876185 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:49.016508 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:49.126863 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:49.362678 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:49.376674 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:49.516582 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:49.617775 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:49.843887 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:49.862633 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:49.876311 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:50.018365 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:50.118860 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:50.363529 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:50.376799 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:50.517163 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:50.618035 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:50.862040 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:50.876066 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:51.017991 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:51.118151 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:51.369365 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:51.380458 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:51.516942 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:51.618971 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:51.844360 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:51.863080 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:51.875962 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:52.016896 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:52.118294 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:52.361324 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:52.376085 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:52.517191 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:52.621066 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:52.865046 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:52.876029 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:53.017585 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:53.118639 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:53.362441 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:53.376705 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:53.516963 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:53.617654 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:53.844711 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:53.865650 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:53.875729 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:54.017555 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:54.118743 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:54.362287 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:54.375772 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:54.516467 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:54.618397 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:54.861689 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:54.876251 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:55.017440 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:55.118553 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:55.361726 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:55.375682 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:55.516524 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:55.618504 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:55.844883 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:55.861202 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:55.877463 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:56.016840 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:56.118352 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:56.361770 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:56.375222 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:56.516591 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:56.618808 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:56.862977 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:56.876095 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:57.017599 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:57.118831 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:57.362394 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:57.375784 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:57.515914 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:57.618677 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:57.863216 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:57.881856 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:58.017169 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:58.117517 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:58.343180 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:01:58.371362 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:58.375434 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:58.515532 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:58.617838 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:58.861628 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:58.875138 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:59.015965 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:59.118179 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:59.361940 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:59.375717 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:01:59.516570 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:01:59.617807 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:01:59.861184 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:01:59.875777 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:00.017140 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:00.120021 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:00.361259 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:00.375714 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:00.516056 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:00.617864 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:00.843286 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:00.861261 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:00.875672 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:01.016945 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:01.118409 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:01.361039 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:01.375536 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:01.516951 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:01.618287 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:01.861723 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:01.875361 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:02.016443 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:02.118072 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:02.362325 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:02.376049 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:02.516613 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:02.618002 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:02.861939 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:02.878982 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:03.023104 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:03.120603 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:03.343711 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:03.362046 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:03.375675 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:03.516162 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:03.618460 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:03.862364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:03.876489 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:04.016812 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:04.118276 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:04.362447 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:04.376247 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:04.516154 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:04.618654 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:04.861882 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:04.877238 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:05.016574 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:05.120849 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:05.362533 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:05.376711 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:05.516968 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:05.618252 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:05.843987 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:05.861541 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:05.876533 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:06.016783 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:06.118192 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:06.361469 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:06.376105 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:06.516355 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:06.618659 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:06.861276 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:06.875644 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:07.016266 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:07.118036 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:07.362331 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:07.375989 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:07.516745 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:07.617983 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:07.862392 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:07.878707 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:08.016585 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:08.117883 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:08.343355 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:08.361845 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:08.375470 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:08.516378 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:08.617913 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:08.861503 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:08.876436 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:09.016874 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:09.118426 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:09.361363 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:09.376105 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:09.516158 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:09.618020 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:09.862983 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:09.875998 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:10.017642 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:10.118977 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:10.344181 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:10.361451 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:10.376024 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:10.515645 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:10.618089 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:10.861753 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:10.875609 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:11.015956 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:11.118620 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:11.361600 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:11.375310 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:11.515932 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:11.618600 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:11.862306 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:11.875932 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:12.021264 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:12.119411 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:12.362191 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:12.375888 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:12.526034 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:12.617653 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:12.843343 1118408 pod_ready.go:102] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"False"
	I1005 21:02:12.861757 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:12.879250 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:13.016091 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:13.118084 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:13.342801 1118408 pod_ready.go:92] pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.342827 1118408 pod_ready.go:81] duration metric: took 34.518843235s waiting for pod "coredns-5dd5756b68-gltv9" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.342839 1118408 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.348214 1118408 pod_ready.go:92] pod "etcd-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.348247 1118408 pod_ready.go:81] duration metric: took 5.399006ms waiting for pod "etcd-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.348262 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.354694 1118408 pod_ready.go:92] pod "kube-apiserver-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.354720 1118408 pod_ready.go:81] duration metric: took 6.448625ms waiting for pod "kube-apiserver-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.354732 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.361131 1118408 pod_ready.go:92] pod "kube-controller-manager-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.361160 1118408 pod_ready.go:81] duration metric: took 6.417372ms waiting for pod "kube-controller-manager-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.361173 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gksxp" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.363343 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:13.368479 1118408 pod_ready.go:92] pod "kube-proxy-gksxp" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.368504 1118408 pod_ready.go:81] duration metric: took 7.323459ms waiting for pod "kube-proxy-gksxp" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.368515 1118408 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.375863 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:13.516556 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:13.618096 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:13.740725 1118408 pod_ready.go:92] pod "kube-scheduler-addons-223209" in "kube-system" namespace has status "Ready":"True"
	I1005 21:02:13.740750 1118408 pod_ready.go:81] duration metric: took 372.227395ms waiting for pod "kube-scheduler-addons-223209" in "kube-system" namespace to be "Ready" ...
	I1005 21:02:13.740761 1118408 pod_ready.go:38] duration metric: took 34.926028728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:02:13.740776 1118408 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:02:13.740838 1118408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:02:13.758113 1118408 api_server.go:72] duration metric: took 37.002858259s to wait for apiserver process to appear ...
	I1005 21:02:13.758138 1118408 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:02:13.758156 1118408 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 21:02:13.767317 1118408 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 21:02:13.768702 1118408 api_server.go:141] control plane version: v1.28.2
	I1005 21:02:13.768727 1118408 api_server.go:131] duration metric: took 10.58248ms to wait for apiserver health ...
	I1005 21:02:13.768736 1118408 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:02:13.862954 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:13.876318 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:13.947411 1118408 system_pods.go:59] 17 kube-system pods found
	I1005 21:02:13.947501 1118408 system_pods.go:61] "coredns-5dd5756b68-gltv9" [ded08413-2f7f-4fe4-8721-39eeaa369647] Running
	I1005 21:02:13.947527 1118408 system_pods.go:61] "csi-hostpath-attacher-0" [da87d806-c98a-436e-bd49-6aab2c6f317f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1005 21:02:13.947571 1118408 system_pods.go:61] "csi-hostpath-resizer-0" [ae0bb96c-13d5-4693-98db-a7f1d70ac2e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 21:02:13.947600 1118408 system_pods.go:61] "csi-hostpathplugin-pb6dg" [7823a263-b219-48db-9627-d2acfd754511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 21:02:13.947640 1118408 system_pods.go:61] "etcd-addons-223209" [edb40954-f17f-44a4-ad0c-c9048adcc8e5] Running
	I1005 21:02:13.947664 1118408 system_pods.go:61] "kindnet-t76t7" [052f693f-6a4f-4a65-ac52-0954ba7c723f] Running
	I1005 21:02:13.947685 1118408 system_pods.go:61] "kube-apiserver-addons-223209" [65ef5976-9ddf-47a5-8133-fabf3a8f8bbb] Running
	I1005 21:02:13.947722 1118408 system_pods.go:61] "kube-controller-manager-addons-223209" [b7fb6331-91d5-491d-ad96-798d169e4cda] Running
	I1005 21:02:13.947748 1118408 system_pods.go:61] "kube-ingress-dns-minikube" [2ce60d3d-8d03-433c-ab3c-d8d49e618785] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 21:02:13.947767 1118408 system_pods.go:61] "kube-proxy-gksxp" [401a617c-61e8-4dca-9fe7-1967c4c7bea9] Running
	I1005 21:02:13.947802 1118408 system_pods.go:61] "kube-scheduler-addons-223209" [a5506554-6773-48ca-99c3-4905e3b1f18b] Running
	I1005 21:02:13.947826 1118408 system_pods.go:61] "metrics-server-7c66d45ddc-sfsm4" [e1e3a8e3-0927-46e4-b6db-53c5e662952e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 21:02:13.947848 1118408 system_pods.go:61] "registry-8687b" [295664eb-0493-448c-865b-3496e891de88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1005 21:02:13.947886 1118408 system_pods.go:61] "registry-proxy-gw7w5" [6fd70213-4315-40d6-b46c-96d44c97c78a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 21:02:13.947915 1118408 system_pods.go:61] "snapshot-controller-58dbcc7b99-4zqqz" [8876d792-1c61-4945-a509-e8406bb689b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:13.947938 1118408 system_pods.go:61] "snapshot-controller-58dbcc7b99-ln2f7" [abeec72e-8987-44c8-a351-0b5eabfdb781] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:13.947972 1118408 system_pods.go:61] "storage-provisioner" [5329506e-b2cb-42d6-9999-04091a5ddda2] Running
	I1005 21:02:13.947997 1118408 system_pods.go:74] duration metric: took 179.255093ms to wait for pod list to return data ...
	I1005 21:02:13.948018 1118408 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:02:14.016400 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:14.117685 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:14.140182 1118408 default_sa.go:45] found service account: "default"
	I1005 21:02:14.140244 1118408 default_sa.go:55] duration metric: took 192.193237ms for default service account to be created ...
	I1005 21:02:14.140282 1118408 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:02:14.347541 1118408 system_pods.go:86] 17 kube-system pods found
	I1005 21:02:14.347613 1118408 system_pods.go:89] "coredns-5dd5756b68-gltv9" [ded08413-2f7f-4fe4-8721-39eeaa369647] Running
	I1005 21:02:14.347639 1118408 system_pods.go:89] "csi-hostpath-attacher-0" [da87d806-c98a-436e-bd49-6aab2c6f317f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1005 21:02:14.347664 1118408 system_pods.go:89] "csi-hostpath-resizer-0" [ae0bb96c-13d5-4693-98db-a7f1d70ac2e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 21:02:14.347704 1118408 system_pods.go:89] "csi-hostpathplugin-pb6dg" [7823a263-b219-48db-9627-d2acfd754511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 21:02:14.347724 1118408 system_pods.go:89] "etcd-addons-223209" [edb40954-f17f-44a4-ad0c-c9048adcc8e5] Running
	I1005 21:02:14.347746 1118408 system_pods.go:89] "kindnet-t76t7" [052f693f-6a4f-4a65-ac52-0954ba7c723f] Running
	I1005 21:02:14.347776 1118408 system_pods.go:89] "kube-apiserver-addons-223209" [65ef5976-9ddf-47a5-8133-fabf3a8f8bbb] Running
	I1005 21:02:14.347798 1118408 system_pods.go:89] "kube-controller-manager-addons-223209" [b7fb6331-91d5-491d-ad96-798d169e4cda] Running
	I1005 21:02:14.347820 1118408 system_pods.go:89] "kube-ingress-dns-minikube" [2ce60d3d-8d03-433c-ab3c-d8d49e618785] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 21:02:14.347840 1118408 system_pods.go:89] "kube-proxy-gksxp" [401a617c-61e8-4dca-9fe7-1967c4c7bea9] Running
	I1005 21:02:14.347860 1118408 system_pods.go:89] "kube-scheduler-addons-223209" [a5506554-6773-48ca-99c3-4905e3b1f18b] Running
	I1005 21:02:14.347895 1118408 system_pods.go:89] "metrics-server-7c66d45ddc-sfsm4" [e1e3a8e3-0927-46e4-b6db-53c5e662952e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 21:02:14.347917 1118408 system_pods.go:89] "registry-8687b" [295664eb-0493-448c-865b-3496e891de88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1005 21:02:14.347938 1118408 system_pods.go:89] "registry-proxy-gw7w5" [6fd70213-4315-40d6-b46c-96d44c97c78a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 21:02:14.347974 1118408 system_pods.go:89] "snapshot-controller-58dbcc7b99-4zqqz" [8876d792-1c61-4945-a509-e8406bb689b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:14.348000 1118408 system_pods.go:89] "snapshot-controller-58dbcc7b99-ln2f7" [abeec72e-8987-44c8-a351-0b5eabfdb781] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:02:14.348019 1118408 system_pods.go:89] "storage-provisioner" [5329506e-b2cb-42d6-9999-04091a5ddda2] Running
	I1005 21:02:14.348044 1118408 system_pods.go:126] duration metric: took 207.735159ms to wait for k8s-apps to be running ...
	I1005 21:02:14.348074 1118408 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:02:14.348185 1118408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:02:14.361910 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:14.368367 1118408 system_svc.go:56] duration metric: took 20.283725ms WaitForService to wait for kubelet.
	I1005 21:02:14.368442 1118408 kubeadm.go:581] duration metric: took 37.613193211s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:02:14.368476 1118408 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:02:14.376477 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:14.517633 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:14.540601 1118408 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:02:14.540687 1118408 node_conditions.go:123] node cpu capacity is 2
	I1005 21:02:14.540714 1118408 node_conditions.go:105] duration metric: took 172.204781ms to run NodePressure ...
	I1005 21:02:14.540740 1118408 start.go:228] waiting for startup goroutines ...
	I1005 21:02:14.618343 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:14.862431 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:14.877156 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:15.021829 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:15.119295 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:15.366553 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:15.379668 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:15.516757 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:15.618715 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:15.861781 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:15.876148 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:16.016398 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:16.118542 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:16.362615 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:16.376204 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:16.516652 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:16.618681 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:16.862744 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:16.875397 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:17.017458 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:17.118403 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:17.364333 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:17.377193 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:17.516026 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:17.618123 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:17.861791 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:17.876238 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:18.016542 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:18.118567 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:18.364098 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:18.380246 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:18.516185 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:18.617953 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:18.861623 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:18.876142 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:19.016458 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:19.118264 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:19.377874 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:19.379169 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:19.518068 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:19.617942 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:19.861761 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:19.875834 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:20.017539 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:20.118612 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:20.361931 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:20.375574 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:20.521082 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:20.617579 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:20.862023 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:20.875773 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:21.017784 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:21.118044 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:21.361725 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:21.376296 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:21.516295 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:21.620100 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:21.864234 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:21.875912 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:22.016679 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:22.120068 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:22.363304 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:22.375833 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:22.516730 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:22.618479 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:22.864844 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:22.877177 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:23.016016 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:23.117635 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:23.362291 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:23.375752 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:23.516391 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:23.617999 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:23.861937 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:23.875498 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:24.016165 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:24.118472 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:24.361747 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:24.375689 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:24.516645 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:24.618759 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:24.861774 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:24.875295 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:25.017081 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:25.118461 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:25.361961 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:25.375378 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:25.516687 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:25.618662 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:25.862089 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:25.875689 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:26.016332 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:26.118787 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:26.362183 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:26.377136 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:26.515905 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:26.618806 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:26.862072 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:26.876004 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:27.015946 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:27.117982 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:02:27.367736 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:27.381301 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:27.516149 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:27.617918 1118408 kapi.go:107] duration metric: took 41.010108165s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1005 21:02:27.621087 1118408 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-223209 cluster.
	I1005 21:02:27.623442 1118408 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1005 21:02:27.625508 1118408 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1005 21:02:27.861839 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:27.875351 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:28.015924 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:28.370451 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:28.376200 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:28.516141 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:28.861440 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:28.876249 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:29.016482 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:29.362496 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:29.379185 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:29.515688 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:29.866820 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:29.876015 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:30.030958 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:30.361546 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:30.376507 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:30.516906 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:30.862239 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:30.882986 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:31.016348 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:31.362588 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:31.376617 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:31.516826 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:31.861460 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:31.876292 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:32.016568 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:32.366196 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:32.378614 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:32.516551 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:32.863778 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:32.876144 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:33.016599 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:33.365486 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:33.375241 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:33.515910 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:33.861507 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:33.876388 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:34.016513 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:34.362618 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:34.376626 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:34.516921 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:34.861640 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:34.875016 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:35.019284 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:35.361888 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:35.375635 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:35.516084 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:35.861772 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:35.875582 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:36.017383 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:36.363749 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:36.376439 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:02:36.516029 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:36.861357 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:36.878888 1118408 kapi.go:107] duration metric: took 53.530936503s to wait for kubernetes.io/minikube-addons=registry ...
	I1005 21:02:37.017714 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:37.361346 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:37.515967 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:37.862819 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:38.021497 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:38.362280 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:38.516382 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:38.861359 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:39.016285 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:39.361820 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:39.517301 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:39.865330 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:40.017482 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:40.362232 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:40.516669 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:40.862432 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:41.018431 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:41.363209 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:41.516901 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:41.862041 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:42.022133 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:42.362340 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:42.515875 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:42.861633 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:43.017192 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:43.362061 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:43.517058 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:43.861426 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:44.017305 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:44.366174 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:44.516662 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:44.861566 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:45.020121 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:45.362548 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:45.517209 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:45.862364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:46.016948 1118408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:02:46.362777 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:46.517327 1118408 kapi.go:107] duration metric: took 1m1.026383851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1005 21:02:46.862211 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:47.361620 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:47.861393 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:48.362210 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:48.861804 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:49.361391 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:49.863289 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:50.365832 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:50.861485 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:51.362364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:51.862394 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:52.361782 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:52.864667 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:53.361795 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:53.864043 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:54.362364 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:54.862770 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:55.365448 1118408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:02:55.862398 1118408 kapi.go:107] duration metric: took 1m12.519575187s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1005 21:02:55.864705 1118408 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, volumesnapshots, gcp-auth, registry, csi-hostpath-driver, ingress
	I1005 21:02:55.866741 1118408 addons.go:502] enable addons completed in 1m19.504053325s: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner inspektor-gadget metrics-server volumesnapshots gcp-auth registry csi-hostpath-driver ingress]
	I1005 21:02:55.866792 1118408 start.go:233] waiting for cluster config update ...
	I1005 21:02:55.866813 1118408 start.go:242] writing updated cluster config ...
	I1005 21:02:55.867155 1118408 ssh_runner.go:195] Run: rm -f paused
	I1005 21:02:55.931264 1118408 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:02:55.933845 1118408 out.go:177] * Done! kubectl is now configured to use "addons-223209" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	4ae3e0913de30       71a676dd070f4       5 seconds ago        Exited              registry-test                            0                   48f27b1484b5f       registry-test
	7fecc50607506       fc9db2894f4e4       7 seconds ago        Exited              helper-pod                               0                   95f272394702f       helper-pod-delete-pvc-f6a4555f-aa36-48f9-875a-61866ab03538
	c55231efd178f       fc9db2894f4e4       11 seconds ago       Exited              busybox                                  0                   923c98a6cadd6       test-local-path
	eaadde75a68f0       fc9db2894f4e4       14 seconds ago       Exited              helper-pod                               0                   f36af4067c33a       helper-pod-create-pvc-f6a4555f-aa36-48f9-875a-61866ab03538
	48669877288ad       0fa733f52482a       18 seconds ago       Running             controller                               0                   ecd0d3fc28fcc       ingress-nginx-controller-5c4c674fdc-mvqqv
	cfd3fea543852       645adbf280ba8       23 seconds ago       Exited              cloud-spanner-emulator                   2                   4b4a19b4fbc6b       cloud-spanner-emulator-7d49f968d9-vdzvm
	d4dee01377e51       ee6d597e62dc8       26 seconds ago       Running             csi-snapshotter                          0                   9b0044d9e312d       csi-hostpathplugin-pb6dg
	5cc0465d65611       642ded511e141       28 seconds ago       Running             csi-provisioner                          0                   9b0044d9e312d       csi-hostpathplugin-pb6dg
	919889a962485       922312104da8a       31 seconds ago       Running             liveness-probe                           0                   9b0044d9e312d       csi-hostpathplugin-pb6dg
	240f550102dea       08f6b2990811a       32 seconds ago       Running             hostpath                                 0                   9b0044d9e312d       csi-hostpathplugin-pb6dg
	73962e5dc0d98       0107d56dbc0be       34 seconds ago       Running             node-driver-registrar                    0                   9b0044d9e312d       csi-hostpathplugin-pb6dg
	36e1aff12a8d7       1461903ec4fe9       36 seconds ago       Running             csi-external-health-monitor-controller   0                   9b0044d9e312d       csi-hostpathplugin-pb6dg
	732a561594459       1499ed4fbd0aa       40 seconds ago       Exited              minikube-ingress-dns                     3                   6bda1d7d9fa61       kube-ingress-dns-minikube
	47ccf121fa752       4d1e5c3e97420       41 seconds ago       Running             volume-snapshot-controller               0                   1608bfd04c40e       snapshot-controller-58dbcc7b99-4zqqz
	ba40c0d1610b6       2a5f29343eb03       46 seconds ago       Running             gcp-auth                                 0                   98d942ed7d056       gcp-auth-d4c87556c-52sbn
	ea4f16f92e116       8f2588812ab29       48 seconds ago       Exited              patch                                    0                   181d89372276a       gcp-auth-certs-patch-2fg82
	4c4b2335107b2       487fa743e1e22       48 seconds ago       Running             csi-resizer                              0                   972e0de3d2b01       csi-hostpath-resizer-0
	2f25382f9b0e0       8f2588812ab29       50 seconds ago       Exited              patch                                    0                   226f84619c3c3       ingress-nginx-admission-patch-jzg5h
	9321237174574       9a80d518f102c       50 seconds ago       Running             csi-attacher                             0                   caef2de338898       csi-hostpath-attacher-0
	5184c7daed3c8       4d1e5c3e97420       52 seconds ago       Running             volume-snapshot-controller               0                   29ee4499c6d38       snapshot-controller-58dbcc7b99-ln2f7
	ef98c594d70a5       8f2588812ab29       54 seconds ago       Exited              create                                   0                   7173d9405f75b       ingress-nginx-admission-create-hlwd9
	33aec96dcfdad       7ce2150c8929b       54 seconds ago       Running             local-path-provisioner                   0                   f55bf0dfe7977       local-path-provisioner-78b46b4d5c-m2r65
	9a0d8f29252c1       8f2588812ab29       56 seconds ago       Exited              create                                   0                   dd5133339509f       gcp-auth-certs-create-2wqnk
	e16ee72ea97ef       24087ab2d9047       59 seconds ago       Running             metrics-server                           0                   c18188e0d0053       metrics-server-7c66d45ddc-sfsm4
	b99ce8af0b748       97e04611ad434       About a minute ago   Running             coredns                                  0                   445b65dbeedb1       coredns-5dd5756b68-gltv9
	7bde2c4c91d66       ce39c29f15dbe       About a minute ago   Running             gadget                                   0                   dfa638da60b84       gadget-rtjcf
	9718bda95560a       ba04bb24b9575       About a minute ago   Running             storage-provisioner                      0                   8e161276df44c       storage-provisioner
	19d1a92e02d00       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni                              0                   bdd9f6134c133       kindnet-t76t7
	2a7610d2c6238       7da62c127fc0f       About a minute ago   Running             kube-proxy                               0                   c1af03799f925       kube-proxy-gksxp
	074e3cb411fba       9cdd6470f48c8       About a minute ago   Running             etcd                                     0                   aff4f25b98799       etcd-addons-223209
	695e20ff6ea91       30bb499447fe1       About a minute ago   Running             kube-apiserver                           0                   c39ed38a3aed0       kube-apiserver-addons-223209
	0e7e370fa42eb       64fc40cee3716       About a minute ago   Running             kube-scheduler                           0                   9cceea69816f2       kube-scheduler-addons-223209
	dca30c30326b6       89d57b83c1786       About a minute ago   Running             kube-controller-manager                  0                   dbf3dc8b11352       kube-controller-manager-addons-223209
	
	* 
	* ==> containerd <==
	* Oct 05 21:03:10 addons-223209 containerd[746]: time="2023-10-05T21:03:10.964334051Z" level=info msg="shim disconnected" id=5e07b3d24c3618c864360df439da266ee83e08babe98ad8a3f53b77fff2871d1
	Oct 05 21:03:10 addons-223209 containerd[746]: time="2023-10-05T21:03:10.964582550Z" level=warning msg="cleaning up after shim disconnected" id=5e07b3d24c3618c864360df439da266ee83e08babe98ad8a3f53b77fff2871d1 namespace=k8s.io
	Oct 05 21:03:10 addons-223209 containerd[746]: time="2023-10-05T21:03:10.964655165Z" level=info msg="cleaning up dead shim"
	Oct 05 21:03:10 addons-223209 containerd[746]: time="2023-10-05T21:03:10.994242525Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:03:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7845 runtime=io.containerd.runc.v2\n"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.021101110Z" level=info msg="shim disconnected" id=20e17c6319f81542e2ecbf3cd8416ee865661d76e04d7c56b38bdc55022e4f98
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.021442171Z" level=warning msg="cleaning up after shim disconnected" id=20e17c6319f81542e2ecbf3cd8416ee865661d76e04d7c56b38bdc55022e4f98 namespace=k8s.io
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.021536644Z" level=info msg="cleaning up dead shim"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.043341346Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:03:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7884 runtime=io.containerd.runc.v2\n"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.056266223Z" level=info msg="TearDown network for sandbox \"5e07b3d24c3618c864360df439da266ee83e08babe98ad8a3f53b77fff2871d1\" successfully"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.056449902Z" level=info msg="StopPodSandbox for \"5e07b3d24c3618c864360df439da266ee83e08babe98ad8a3f53b77fff2871d1\" returns successfully"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.150648015Z" level=info msg="TearDown network for sandbox \"20e17c6319f81542e2ecbf3cd8416ee865661d76e04d7c56b38bdc55022e4f98\" successfully"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.150703687Z" level=info msg="StopPodSandbox for \"20e17c6319f81542e2ecbf3cd8416ee865661d76e04d7c56b38bdc55022e4f98\" returns successfully"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.618506453Z" level=info msg="RemoveContainer for \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\""
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.630134385Z" level=info msg="RemoveContainer for \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\" returns successfully"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.644704217Z" level=error msg="ContainerStatus for \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\": not found"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.663032358Z" level=info msg="RemoveContainer for \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\""
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.679438932Z" level=info msg="RemoveContainer for \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\" returns successfully"
	Oct 05 21:03:11 addons-223209 containerd[746]: time="2023-10-05T21:03:11.686315083Z" level=error msg="ContainerStatus for \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\": not found"
	Oct 05 21:03:12 addons-223209 containerd[746]: time="2023-10-05T21:03:12.878751374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:headlamp-58b88cff49-4xgkh,Uid:81c7656f-0f6b-4015-96ed-ed34fc06d207,Namespace:headlamp,Attempt:0,}"
	Oct 05 21:03:12 addons-223209 containerd[746]: time="2023-10-05T21:03:12.948086101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 05 21:03:12 addons-223209 containerd[746]: time="2023-10-05T21:03:12.948161891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 05 21:03:12 addons-223209 containerd[746]: time="2023-10-05T21:03:12.948173854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 05 21:03:12 addons-223209 containerd[746]: time="2023-10-05T21:03:12.948667472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe19f6eeb4a3911661ca953f67fcd06b6a439079628f8513669d1de0fb0b72b7 pid=8134 runtime=io.containerd.runc.v2
	Oct 05 21:03:13 addons-223209 containerd[746]: time="2023-10-05T21:03:13.051906584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:headlamp-58b88cff49-4xgkh,Uid:81c7656f-0f6b-4015-96ed-ed34fc06d207,Namespace:headlamp,Attempt:0,} returns sandbox id \"fe19f6eeb4a3911661ca953f67fcd06b6a439079628f8513669d1de0fb0b72b7\""
	Oct 05 21:03:13 addons-223209 containerd[746]: time="2023-10-05T21:03:13.056722856Z" level=info msg="PullImage \"ghcr.io/headlamp-k8s/headlamp:v0.19.1@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753\""
	
	* 
	* ==> coredns [b99ce8af0b748fd898662415f7aa7e13d1ec51059aba0bbc2ad32a42117ba79b] <==
	* [INFO] 10.244.0.12:48852 - 13413 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002022546s
	[INFO] 10.244.0.12:35240 - 62357 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000912028s
	[INFO] 10.244.0.12:57398 - 48786 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001142337s
	[INFO] 10.244.0.16:50476 - 50010 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198637s
	[INFO] 10.244.0.16:50476 - 50775 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000231031s
	[INFO] 10.244.0.16:60389 - 58186 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121977s
	[INFO] 10.244.0.16:60389 - 28999 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009074s
	[INFO] 10.244.0.16:43848 - 37877 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089419s
	[INFO] 10.244.0.16:43848 - 39163 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096155s
	[INFO] 10.244.0.16:38265 - 24767 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005806946s
	[INFO] 10.244.0.16:38265 - 45245 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00602466s
	[INFO] 10.244.0.16:37626 - 57568 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150194s
	[INFO] 10.244.0.16:37626 - 19426 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150612s
	[INFO] 10.244.0.16:42838 - 49484 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000117587s
	[INFO] 10.244.0.16:42838 - 1360 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038744s
	[INFO] 10.244.0.16:52727 - 36242 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047253s
	[INFO] 10.244.0.16:52727 - 45457 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037621s
	[INFO] 10.244.0.16:59711 - 59900 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075282s
	[INFO] 10.244.0.16:59711 - 16127 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092963s
	[INFO] 10.244.0.16:39880 - 53142 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001157139s
	[INFO] 10.244.0.16:39880 - 58260 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001293302s
	[INFO] 10.244.0.16:43054 - 40081 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061612s
	[INFO] 10.244.0.16:43054 - 49299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084184s
	[INFO] 10.244.0.22:37202 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243921s
	[INFO] 10.244.0.22:35337 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147002s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-223209
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-223209
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=addons-223209
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_01_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-223209
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-223209"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:01:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-223209
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:03:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:02:56 +0000   Thu, 05 Oct 2023 21:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:02:56 +0000   Thu, 05 Oct 2023 21:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:02:56 +0000   Thu, 05 Oct 2023 21:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:02:56 +0000   Thu, 05 Oct 2023 21:01:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-223209
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 d05e31b9cee54500949e5b5b6300f221
	  System UUID:                4060a633-0e01-4f8d-a752-012b1f3e17a0
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-7d49f968d9-vdzvm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  gadget                      gadget-rtjcf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  gcp-auth                    gcp-auth-d4c87556c-52sbn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  headlamp                    headlamp-58b88cff49-4xgkh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  ingress-nginx               ingress-nginx-controller-5c4c674fdc-mvqqv    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-gltv9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     97s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-pb6dg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 etcd-addons-223209                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         109s
	  kube-system                 kindnet-t76t7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      97s
	  kube-system                 kube-apiserver-addons-223209                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-addons-223209        200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-gksxp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-addons-223209                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 metrics-server-7c66d45ddc-sfsm4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         92s
	  kube-system                 snapshot-controller-58dbcc7b99-4zqqz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 snapshot-controller-58dbcc7b99-ln2f7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  local-path-storage          local-path-provisioner-78b46b4d5c-m2r65      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node addons-223209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node addons-223209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet          Node addons-223209 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node addons-223209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node addons-223209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node addons-223209 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             110s                 kubelet          Node addons-223209 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                109s                 kubelet          Node addons-223209 status is now: NodeReady
	  Normal  RegisteredNode           98s                  node-controller  Node addons-223209 event: Registered Node addons-223209 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001082] FS-Cache: O-key=[8] '0f613b0000000000'
	[  +0.000687] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000c2b6221a
	[  +0.001022] FS-Cache: N-key=[8] '0f613b0000000000'
	[  +0.002852] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=0000003b [p=00000039 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000946bf243
	[  +0.001102] FS-Cache: O-key=[8] '0f613b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000007e5c81de
	[  +0.001243] FS-Cache: N-key=[8] '0f613b0000000000'
	[  +2.263905] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000943] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000c8b3c65a
	[  +0.001019] FS-Cache: O-key=[8] '0e613b0000000000'
	[  +0.000696] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ef961079
	[  +0.001093] FS-Cache: N-key=[8] '0e613b0000000000'
	[  +0.429619] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001097] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=000000000d4a2f1f
	[  +0.001076] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000c2b6221a
	[  +0.001153] FS-Cache: N-key=[8] '14613b0000000000'
	
	* 
	* ==> etcd [074e3cb411fbae49fc6f05f0294dd7d0dde8aaee6560dea8418e5fabada34035] <==
	* {"level":"info","ts":"2023-10-05T21:01:16.339337Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:01:16.339782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-05T21:01:16.344474Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-05T21:01:16.344875Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-05T21:01:16.355374Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:01:16.356324Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:01:16.355906Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-05T21:01:16.710677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-05T21:01:16.710883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-05T21:01:16.710993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-05T21:01:16.711093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.711176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.711273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.711361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-05T21:01:16.715193Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.717368Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-223209 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:01:16.717522Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:01:16.718449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.720579Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.720749Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:01:16.719254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-05T21:01:16.730583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:01:16.731861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:01:16.732379Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:01:16.732579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [ba40c0d1610b60d45a5d26be4aab2af80d19a2158540f1a80e949ab6abe3ca61] <==
	* 2023/10/05 21:02:26 GCP Auth Webhook started!
	2023/10/05 21:02:56 Ready to marshal response ...
	2023/10/05 21:02:56 Ready to write response ...
	2023/10/05 21:02:57 Ready to marshal response ...
	2023/10/05 21:02:57 Ready to write response ...
	2023/10/05 21:03:04 Ready to marshal response ...
	2023/10/05 21:03:04 Ready to write response ...
	2023/10/05 21:03:06 Ready to marshal response ...
	2023/10/05 21:03:06 Ready to write response ...
	2023/10/05 21:03:12 Ready to marshal response ...
	2023/10/05 21:03:12 Ready to write response ...
	2023/10/05 21:03:12 Ready to marshal response ...
	2023/10/05 21:03:12 Ready to write response ...
	2023/10/05 21:03:12 Ready to marshal response ...
	2023/10/05 21:03:12 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:03:13 up  6:45,  0 users,  load average: 1.70, 2.34, 2.89
	Linux addons-223209 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [19d1a92e02d001e34ff2efd87c919c25cb07284a518f6a3f353c4ac05f21495b] <==
	* I1005 21:01:37.625014       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1005 21:01:37.625088       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1005 21:01:37.625198       1 main.go:116] setting mtu 1500 for CNI 
	I1005 21:01:37.625208       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 21:01:37.625223       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 21:02:07.867390       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1005 21:02:07.883266       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:07.883297       1 main.go:227] handling current node
	I1005 21:02:17.898927       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:17.898955       1 main.go:227] handling current node
	I1005 21:02:27.910864       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:27.910890       1 main.go:227] handling current node
	I1005 21:02:37.922480       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:37.922507       1 main.go:227] handling current node
	I1005 21:02:47.934081       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:47.934109       1 main.go:227] handling current node
	I1005 21:02:57.946453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:02:57.946484       1 main.go:227] handling current node
	I1005 21:03:07.951270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:03:07.951297       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [695e20ff6ea91a4042a7bc2aa3dc2d17272a444da85849c259b420a17f912233] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1005 21:01:42.756541       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1005 21:01:43.007009       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.28.17"}
	I1005 21:01:43.033790       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.3.38"}
	I1005 21:01:43.108390       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I1005 21:01:43.152015       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1005 21:01:43.152293       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1005 21:01:43.848075       1 aggregator.go:165] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1005 21:01:45.129066       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.110.29.152"}
	I1005 21:01:45.150542       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1005 21:01:45.359663       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.100.82.237"}
	W1005 21:01:45.865399       1 aggregator.go:165] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1005 21:01:46.411738       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.249.71"}
	E1005 21:02:17.286491       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.92.255:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.92.255:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.92.255:443: connect: connection refused
	W1005 21:02:17.286569       1 handler_proxy.go:93] no RequestInfo found in the context
	E1005 21:02:17.286631       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1005 21:02:17.287503       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.92.255:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.92.255:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.92.255:443: connect: connection refused
	I1005 21:02:17.287747       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1005 21:02:17.293227       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.92.255:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.92.255:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.92.255:443: connect: connection refused
	I1005 21:02:17.414933       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1005 21:02:20.041588       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1005 21:03:12.416632       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.254.184"}
	
	* 
	* ==> kube-controller-manager [dca30c30326b6c7c6dbe5bd35f3ffbfae4fb6fab8afb039c65ee2839527683d1] <==
	* I1005 21:02:37.276378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="57.279µs"
	I1005 21:02:50.032329       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 21:02:50.096288       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 21:02:50.503835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="94.186µs"
	I1005 21:02:52.286495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="10.152385ms"
	I1005 21:02:52.286608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="69.842µs"
	I1005 21:02:55.533625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="73.025µs"
	I1005 21:02:56.690604       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1005 21:02:56.990944       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1005 21:02:56.991183       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1005 21:02:57.014207       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1005 21:02:57.118109       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1005 21:03:01.765688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="84.225µs"
	I1005 21:03:07.400865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="20.597503ms"
	I1005 21:03:07.402577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="59.233µs"
	I1005 21:03:10.596522       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.065µs"
	I1005 21:03:12.452840       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-58b88cff49 to 1"
	I1005 21:03:12.472118       1 event.go:307] "Event occurred" object="headlamp/headlamp-58b88cff49" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-58b88cff49-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I1005 21:03:12.499302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="46.282064ms"
	E1005 21:03:12.499338       1 replica_set.go:557] sync "headlamp/headlamp-58b88cff49" failed with pods "headlamp-58b88cff49-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I1005 21:03:12.516985       1 event.go:307] "Event occurred" object="headlamp/headlamp-58b88cff49" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-58b88cff49-4xgkh"
	I1005 21:03:12.539836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="40.446031ms"
	I1005 21:03:12.583800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="43.919155ms"
	I1005 21:03:12.583989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="149.604µs"
	I1005 21:03:12.584119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="116.069µs"
	
	* 
	* ==> kube-proxy [2a7610d2c6238462ee37cbd4526d197a8a86a235fba199c56917eccfb223ba73] <==
	* I1005 21:01:37.522916       1 server_others.go:69] "Using iptables proxy"
	I1005 21:01:37.543860       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1005 21:01:37.635621       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:01:37.637849       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:01:37.637881       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:01:37.637889       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:01:37.637949       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:01:37.638183       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:01:37.638194       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:01:37.641190       1 config.go:188] "Starting service config controller"
	I1005 21:01:37.641254       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:01:37.641315       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:01:37.641321       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:01:37.641931       1 config.go:315] "Starting node config controller"
	I1005 21:01:37.641939       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:01:37.743502       1 shared_informer.go:318] Caches are synced for node config
	I1005 21:01:37.743530       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:01:37.743557       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0e7e370fa42eb35f418c414fe0709d9c0be0850bdfdc67f90e38ca54153ee27f] <==
	* I1005 21:01:19.976978       1 serving.go:348] Generated self-signed cert in-memory
	W1005 21:01:21.611710       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1005 21:01:21.611933       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:01:21.612026       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1005 21:01:21.612098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 21:01:21.631734       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1005 21:01:21.632014       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:01:21.634164       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1005 21:01:21.634454       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1005 21:01:21.634571       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:01:21.634679       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1005 21:01:21.648397       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:01:21.657427       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1005 21:01:23.135119       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.297578    1344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kjgb5\" (UniqueName: \"kubernetes.io/projected/295664eb-0493-448c-865b-3496e891de88-kube-api-access-kjgb5\") on node \"addons-223209\" DevicePath \"\""
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.304773    1344 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd70213-4315-40d6-b46c-96d44c97c78a-kube-api-access-4xh88" (OuterVolumeSpecName: "kube-api-access-4xh88") pod "6fd70213-4315-40d6-b46c-96d44c97c78a" (UID: "6fd70213-4315-40d6-b46c-96d44c97c78a"). InnerVolumeSpecName "kube-api-access-4xh88". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.398521    1344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4xh88\" (UniqueName: \"kubernetes.io/projected/6fd70213-4315-40d6-b46c-96d44c97c78a-kube-api-access-4xh88\") on node \"addons-223209\" DevicePath \"\""
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.604113    1344 scope.go:117] "RemoveContainer" containerID="fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.633836    1344 scope.go:117] "RemoveContainer" containerID="fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: E1005 21:03:11.645645    1344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\": not found" containerID="fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.645742    1344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4"} err="failed to get container status \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\": rpc error: code = NotFound desc = an error occurred when try to find container \"fcde0b2f2c46ef292cbac020462cd7654e555673b48325bb4ef6352448295ec4\": not found"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.645765    1344 scope.go:117] "RemoveContainer" containerID="0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.685862    1344 scope.go:117] "RemoveContainer" containerID="0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: E1005 21:03:11.687624    1344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\": not found" containerID="0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.687682    1344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a"} err="failed to get container status \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0960f57209361fade48b41c8560dbe2a7a832586f5ee0ebc1d9c502d09db6b9a\": not found"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.751347    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="295664eb-0493-448c-865b-3496e891de88" path="/var/lib/kubelet/pods/295664eb-0493-448c-865b-3496e891de88/volumes"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.751684    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="541ae7c6-c387-49fc-a5f7-ce71b8563f83" path="/var/lib/kubelet/pods/541ae7c6-c387-49fc-a5f7-ce71b8563f83/volumes"
	Oct 05 21:03:11 addons-223209 kubelet[1344]: I1005 21:03:11.751977    1344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6fd70213-4315-40d6-b46c-96d44c97c78a" path="/var/lib/kubelet/pods/6fd70213-4315-40d6-b46c-96d44c97c78a/volumes"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.533317    1344 topology_manager.go:215] "Topology Admit Handler" podUID="81c7656f-0f6b-4015-96ed-ed34fc06d207" podNamespace="headlamp" podName="headlamp-58b88cff49-4xgkh"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: E1005 21:03:12.533847    1344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="295664eb-0493-448c-865b-3496e891de88" containerName="registry"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: E1005 21:03:12.533869    1344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fd70213-4315-40d6-b46c-96d44c97c78a" containerName="registry-proxy"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: E1005 21:03:12.533901    1344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c517dae-6cf0-4d96-b4b7-edf2505dfe1d" containerName="helper-pod"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: E1005 21:03:12.533913    1344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="541ae7c6-c387-49fc-a5f7-ce71b8563f83" containerName="registry-test"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.533981    1344 memory_manager.go:346] "RemoveStaleState removing state" podUID="295664eb-0493-448c-865b-3496e891de88" containerName="registry"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.533995    1344 memory_manager.go:346] "RemoveStaleState removing state" podUID="6fd70213-4315-40d6-b46c-96d44c97c78a" containerName="registry-proxy"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.534003    1344 memory_manager.go:346] "RemoveStaleState removing state" podUID="541ae7c6-c387-49fc-a5f7-ce71b8563f83" containerName="registry-test"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.534012    1344 memory_manager.go:346] "RemoveStaleState removing state" podUID="8c517dae-6cf0-4d96-b4b7-edf2505dfe1d" containerName="helper-pod"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.614433    1344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpzw\" (UniqueName: \"kubernetes.io/projected/81c7656f-0f6b-4015-96ed-ed34fc06d207-kube-api-access-lcpzw\") pod \"headlamp-58b88cff49-4xgkh\" (UID: \"81c7656f-0f6b-4015-96ed-ed34fc06d207\") " pod="headlamp/headlamp-58b88cff49-4xgkh"
	Oct 05 21:03:12 addons-223209 kubelet[1344]: I1005 21:03:12.614504    1344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/81c7656f-0f6b-4015-96ed-ed34fc06d207-gcp-creds\") pod \"headlamp-58b88cff49-4xgkh\" (UID: \"81c7656f-0f6b-4015-96ed-ed34fc06d207\") " pod="headlamp/headlamp-58b88cff49-4xgkh"
	
	* 
	* ==> storage-provisioner [9718bda95560a4870592207607f9cd87fe6edee77674558bcefb13eb58071cc5] <==
	* I1005 21:01:42.822421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 21:01:42.857212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 21:01:42.860987       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 21:01:42.890859       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 21:01:42.891324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ebf97a6-726e-4529-8c22-73db6f04d521", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-223209_e43f7678-8b88-4326-964e-300d0baacbd1 became leader
	I1005 21:01:42.891357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-223209_e43f7678-8b88-4326-964e-300d0baacbd1!
	I1005 21:01:42.991709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-223209_e43f7678-8b88-4326-964e-300d0baacbd1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-223209 -n addons-223209
helpers_test.go:261: (dbg) Run:  kubectl --context addons-223209 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-58b88cff49-4xgkh ingress-nginx-admission-create-hlwd9 ingress-nginx-admission-patch-jzg5h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-223209 describe pod headlamp-58b88cff49-4xgkh ingress-nginx-admission-create-hlwd9 ingress-nginx-admission-patch-jzg5h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-223209 describe pod headlamp-58b88cff49-4xgkh ingress-nginx-admission-create-hlwd9 ingress-nginx-admission-patch-jzg5h: exit status 1 (128.276864ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "headlamp-58b88cff49-4xgkh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-hlwd9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jzg5h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-223209 describe pod headlamp-58b88cff49-4xgkh ingress-nginx-admission-create-hlwd9 ingress-nginx-admission-patch-jzg5h: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (9.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image load --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 image load --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr: (4.241626742s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-282713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image load --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 image load --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr: (3.363293428s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-282713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.591856159s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-282713
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image load --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 image load --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr: (3.254423301s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-282713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image save gcr.io/google-containers/addon-resizer:functional-282713 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1005 21:09:58.314839 1151026 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:09:58.315081 1151026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:09:58.315092 1151026 out.go:309] Setting ErrFile to fd 2...
	I1005 21:09:58.315098 1151026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:09:58.315366 1151026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:09:58.316019 1151026 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:09:58.316150 1151026 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:09:58.316653 1151026 cli_runner.go:164] Run: docker container inspect functional-282713 --format={{.State.Status}}
	I1005 21:09:58.335741 1151026 ssh_runner.go:195] Run: systemctl --version
	I1005 21:09:58.335859 1151026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-282713
	I1005 21:09:58.355639 1151026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/functional-282713/id_rsa Username:docker}
	I1005 21:09:58.448923 1151026 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1005 21:09:58.448988 1151026 cache_images.go:254] Failed to load cached images for profile functional-282713. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1005 21:09:58.449008 1151026 cache_images.go:262] succeeded pushing to: 
	I1005 21:09:58.449013 1151026 cache_images.go:263] failed pushing to: functional-282713

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (56s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:205: (dbg) Run:  kubectl --context ingress-addon-legacy-027764 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:205: (dbg) Done: kubectl --context ingress-addon-legacy-027764 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.199879841s)
addons_test.go:230: (dbg) Run:  kubectl --context ingress-addon-legacy-027764 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-027764 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [73530b72-7556-4327-81c2-290aee9d96a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [73530b72-7556-4327-81c2-290aee9d96a3] Running
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.013916226s
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context ingress-addon-legacy-027764 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:295: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.019686214s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:297: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:301: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons disable ingress-dns --alsologtostderr -v=1: (11.253393638s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons disable ingress --alsologtostderr -v=1: (7.535149796s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-027764
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-027764:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1fdcbd110990a1541e2afdb36baabdbe26e53cdb4d9a4e170cbbf5e33aec377",
	        "Created": "2023-10-05T21:10:24.845412032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1152291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:10:25.215182583Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/e1fdcbd110990a1541e2afdb36baabdbe26e53cdb4d9a4e170cbbf5e33aec377/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1fdcbd110990a1541e2afdb36baabdbe26e53cdb4d9a4e170cbbf5e33aec377/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1fdcbd110990a1541e2afdb36baabdbe26e53cdb4d9a4e170cbbf5e33aec377/hosts",
	        "LogPath": "/var/lib/docker/containers/e1fdcbd110990a1541e2afdb36baabdbe26e53cdb4d9a4e170cbbf5e33aec377/e1fdcbd110990a1541e2afdb36baabdbe26e53cdb4d9a4e170cbbf5e33aec377-json.log",
	        "Name": "/ingress-addon-legacy-027764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-027764:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-027764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/739e631cd578ff830175d6f65977ebae3e27e6c09837e0c3016df0c1a9eb7e4e-init/diff:/var/lib/docker/overlay2/0ac9dde3ffb5508a08f1d2d343ad7198828af6fb1810d9bf7c6479a8d59aaca8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/739e631cd578ff830175d6f65977ebae3e27e6c09837e0c3016df0c1a9eb7e4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/739e631cd578ff830175d6f65977ebae3e27e6c09837e0c3016df0c1a9eb7e4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/739e631cd578ff830175d6f65977ebae3e27e6c09837e0c3016df0c1a9eb7e4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-027764",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-027764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-027764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-027764",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-027764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b455e640cb64832a6289440bd36c2add6fc1117ee6805278706c4a20e99f82c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34027"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34024"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34026"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34025"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4b455e640cb6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-027764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e1fdcbd11099",
	                        "ingress-addon-legacy-027764"
	                    ],
	                    "NetworkID": "41d09a5fd7826e4bf656af7ccda9c9c42d7cd975c95b80175db34643ce753666",
	                    "EndpointID": "c3594d2c6cd6f1090b4cb9114b8c886b6fcfdafe625215cfffd2f60e7c4d566d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-027764 -n ingress-addon-legacy-027764
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-027764 logs -n 25: (1.554012933s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-282713 image ls                                                   | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	| image   | functional-282713 image load --daemon                                        | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-282713                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713 image ls                                                   | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	| image   | functional-282713 image load --daemon                                        | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-282713                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713 image ls                                                   | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	| image   | functional-282713 image save                                                 | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-282713                     |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713 image rm                                                   | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-282713                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713 image ls                                                   | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	| image   | functional-282713 image load                                                 | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713 image save --daemon                                        | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-282713                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713                                                            | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713                                                            | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-282713 ssh pgrep                                                  | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-282713                                                            | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713                                                            | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:09 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-282713 image build -t                                             | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:09 UTC | 05 Oct 23 21:10 UTC |
	|         | localhost/my-image:functional-282713                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-282713 image ls                                                   | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:10 UTC | 05 Oct 23 21:10 UTC |
	| delete  | -p functional-282713                                                         | functional-282713           | jenkins | v1.31.2 | 05 Oct 23 21:10 UTC | 05 Oct 23 21:10 UTC |
	| start   | -p ingress-addon-legacy-027764                                               | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:10 UTC | 05 Oct 23 21:11 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-027764                                                  | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:11 UTC | 05 Oct 23 21:11 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-027764                                                  | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:11 UTC | 05 Oct 23 21:11 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-027764                                                  | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:12 UTC | 05 Oct 23 21:12 UTC |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-027764 ip                                               | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:12 UTC | 05 Oct 23 21:12 UTC |
	| addons  | ingress-addon-legacy-027764                                                  | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:12 UTC | 05 Oct 23 21:12 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-027764                                                  | ingress-addon-legacy-027764 | jenkins | v1.31.2 | 05 Oct 23 21:12 UTC | 05 Oct 23 21:12 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:10:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:10:05.412041 1151829 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:10:05.412332 1151829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:10:05.412361 1151829 out.go:309] Setting ErrFile to fd 2...
	I1005 21:10:05.412383 1151829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:10:05.412689 1151829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:10:05.413125 1151829 out.go:303] Setting JSON to false
	I1005 21:10:05.414399 1151829 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24752,"bootTime":1696515454,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:10:05.414500 1151829 start.go:138] virtualization:  
	I1005 21:10:05.417354 1151829 out.go:177] * [ingress-addon-legacy-027764] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:10:05.420222 1151829 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:10:05.422264 1151829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:10:05.420429 1151829 notify.go:220] Checking for updates...
	I1005 21:10:05.426181 1151829 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:10:05.428462 1151829 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:10:05.430794 1151829 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:10:05.432947 1151829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:10:05.435221 1151829 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:10:05.462513 1151829 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:10:05.462607 1151829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:10:05.546633 1151829 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-05 21:10:05.535891558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:10:05.546783 1151829 docker.go:294] overlay module found
	I1005 21:10:05.549162 1151829 out.go:177] * Using the docker driver based on user configuration
	I1005 21:10:05.551244 1151829 start.go:298] selected driver: docker
	I1005 21:10:05.551264 1151829 start.go:902] validating driver "docker" against <nil>
	I1005 21:10:05.551280 1151829 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:10:05.551894 1151829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:10:05.613200 1151829 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-05 21:10:05.60369054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:10:05.613358 1151829 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:10:05.613599 1151829 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 21:10:05.617274 1151829 out.go:177] * Using Docker driver with root privileges
	I1005 21:10:05.627361 1151829 cni.go:84] Creating CNI manager for ""
	I1005 21:10:05.627385 1151829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:10:05.627397 1151829 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:10:05.627409 1151829 start_flags.go:321] config:
	{Name:ingress-addon-legacy-027764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-027764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:10:05.631249 1151829 out.go:177] * Starting control plane node ingress-addon-legacy-027764 in cluster ingress-addon-legacy-027764
	I1005 21:10:05.633307 1151829 cache.go:122] Beginning downloading kic base image for docker with containerd
	I1005 21:10:05.635157 1151829 out.go:177] * Pulling base image ...
	I1005 21:10:05.637040 1151829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1005 21:10:05.637125 1151829 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:10:05.655020 1151829 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:10:05.655061 1151829 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:10:05.704636 1151829 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1005 21:10:05.704670 1151829 cache.go:57] Caching tarball of preloaded images
	I1005 21:10:05.704827 1151829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1005 21:10:05.707038 1151829 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1005 21:10:05.709252 1151829 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:10:05.822638 1151829 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1005 21:10:16.886672 1151829 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:10:16.886777 1151829 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:10:18.091090 1151829 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1005 21:10:18.091487 1151829 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/config.json ...
	I1005 21:10:18.091522 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/config.json: {Name:mka73bb9bcce075d88bfd68ebbc6a24099d08e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:18.091725 1151829 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:10:18.091750 1151829 start.go:365] acquiring machines lock for ingress-addon-legacy-027764: {Name:mk0c563c28f7bbbc07b64f82b230ec5a1497c64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:10:18.091810 1151829 start.go:369] acquired machines lock for "ingress-addon-legacy-027764" in 49.969µs
	I1005 21:10:18.091837 1151829 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-027764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-027764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 21:10:18.091908 1151829 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:10:18.094435 1151829 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1005 21:10:18.094736 1151829 start.go:159] libmachine.API.Create for "ingress-addon-legacy-027764" (driver="docker")
	I1005 21:10:18.094766 1151829 client.go:168] LocalClient.Create starting
	I1005 21:10:18.094908 1151829 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem
	I1005 21:10:18.094951 1151829 main.go:141] libmachine: Decoding PEM data...
	I1005 21:10:18.094968 1151829 main.go:141] libmachine: Parsing certificate...
	I1005 21:10:18.095024 1151829 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem
	I1005 21:10:18.095066 1151829 main.go:141] libmachine: Decoding PEM data...
	I1005 21:10:18.095088 1151829 main.go:141] libmachine: Parsing certificate...
	I1005 21:10:18.095516 1151829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-027764 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:10:18.114876 1151829 cli_runner.go:211] docker network inspect ingress-addon-legacy-027764 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:10:18.114996 1151829 network_create.go:281] running [docker network inspect ingress-addon-legacy-027764] to gather additional debugging logs...
	I1005 21:10:18.115019 1151829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-027764
	W1005 21:10:18.133133 1151829 cli_runner.go:211] docker network inspect ingress-addon-legacy-027764 returned with exit code 1
	I1005 21:10:18.133169 1151829 network_create.go:284] error running [docker network inspect ingress-addon-legacy-027764]: docker network inspect ingress-addon-legacy-027764: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-027764 not found
	I1005 21:10:18.133185 1151829 network_create.go:286] output of [docker network inspect ingress-addon-legacy-027764]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-027764 not found
	
	** /stderr **
	I1005 21:10:18.133288 1151829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:10:18.152305 1151829 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000bfe6a0}
	I1005 21:10:18.152353 1151829 network_create.go:124] attempt to create docker network ingress-addon-legacy-027764 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 21:10:18.152417 1151829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-027764 ingress-addon-legacy-027764
	I1005 21:10:18.227961 1151829 network_create.go:108] docker network ingress-addon-legacy-027764 192.168.49.0/24 created
	I1005 21:10:18.227994 1151829 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-027764" container
	I1005 21:10:18.228066 1151829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:10:18.244555 1151829 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-027764 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-027764 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:10:18.263470 1151829 oci.go:103] Successfully created a docker volume ingress-addon-legacy-027764
	I1005 21:10:18.263555 1151829 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-027764-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-027764 --entrypoint /usr/bin/test -v ingress-addon-legacy-027764:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:10:19.771186 1151829 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-027764-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-027764 --entrypoint /usr/bin/test -v ingress-addon-legacy-027764:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.507582347s)
	I1005 21:10:19.771220 1151829 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-027764
	I1005 21:10:19.771249 1151829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1005 21:10:19.771270 1151829 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:10:19.771357 1151829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-027764:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:10:24.763398 1151829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-027764:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.991998617s)
	I1005 21:10:24.763430 1151829 kic.go:199] duration metric: took 4.992157 seconds to extract preloaded images to volume
	W1005 21:10:24.763564 1151829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:10:24.763689 1151829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:10:24.828770 1151829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-027764 --name ingress-addon-legacy-027764 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-027764 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-027764 --network ingress-addon-legacy-027764 --ip 192.168.49.2 --volume ingress-addon-legacy-027764:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:10:25.223777 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Running}}
	I1005 21:10:25.246769 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Status}}
	I1005 21:10:25.273503 1151829 cli_runner.go:164] Run: docker exec ingress-addon-legacy-027764 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:10:25.370973 1151829 oci.go:144] the created container "ingress-addon-legacy-027764" has a running status.
	I1005 21:10:25.371001 1151829 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa...
	I1005 21:10:25.720298 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 21:10:25.720357 1151829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:10:25.767424 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Status}}
	I1005 21:10:25.794256 1151829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:10:25.794287 1151829 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-027764 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:10:25.887116 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Status}}
	I1005 21:10:25.913931 1151829 machine.go:88] provisioning docker machine ...
	I1005 21:10:25.913960 1151829 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-027764"
	I1005 21:10:25.914034 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:25.948799 1151829 main.go:141] libmachine: Using SSH client type: native
	I1005 21:10:25.949272 1151829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I1005 21:10:25.949287 1151829 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-027764 && echo "ingress-addon-legacy-027764" | sudo tee /etc/hostname
	I1005 21:10:25.949986 1151829 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45386->127.0.0.1:34028: read: connection reset by peer
	I1005 21:10:29.094261 1151829 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-027764
	
	I1005 21:10:29.094352 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:29.113932 1151829 main.go:141] libmachine: Using SSH client type: native
	I1005 21:10:29.114352 1151829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I1005 21:10:29.114377 1151829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-027764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-027764/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-027764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:10:29.244399 1151829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:10:29.244425 1151829 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1112519/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1112519/.minikube}
	I1005 21:10:29.244447 1151829 ubuntu.go:177] setting up certificates
	I1005 21:10:29.244456 1151829 provision.go:83] configureAuth start
	I1005 21:10:29.244529 1151829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-027764
	I1005 21:10:29.264942 1151829 provision.go:138] copyHostCerts
	I1005 21:10:29.264986 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.pem
	I1005 21:10:29.265018 1151829 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.pem, removing ...
	I1005 21:10:29.265031 1151829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.pem
	I1005 21:10:29.265109 1151829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.pem (1082 bytes)
	I1005 21:10:29.265191 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-1112519/.minikube/cert.pem
	I1005 21:10:29.265214 1151829 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1112519/.minikube/cert.pem, removing ...
	I1005 21:10:29.265223 1151829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1112519/.minikube/cert.pem
	I1005 21:10:29.265252 1151829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/cert.pem (1123 bytes)
	I1005 21:10:29.265301 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-1112519/.minikube/key.pem
	I1005 21:10:29.265322 1151829 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1112519/.minikube/key.pem, removing ...
	I1005 21:10:29.265329 1151829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1112519/.minikube/key.pem
	I1005 21:10:29.265353 1151829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1112519/.minikube/key.pem (1675 bytes)
	I1005 21:10:29.265402 1151829 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-027764 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-027764]
	I1005 21:10:29.728282 1151829 provision.go:172] copyRemoteCerts
	I1005 21:10:29.728359 1151829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:10:29.728405 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:29.747575 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:10:29.841861 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 21:10:29.841922 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1005 21:10:29.870344 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 21:10:29.870411 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:10:29.898532 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 21:10:29.898592 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:10:29.926826 1151829 provision.go:86] duration metric: configureAuth took 682.35342ms
	I1005 21:10:29.926852 1151829 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:10:29.927042 1151829 config.go:182] Loaded profile config "ingress-addon-legacy-027764": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1005 21:10:29.927072 1151829 machine.go:91] provisioned docker machine in 4.013125935s
	I1005 21:10:29.927079 1151829 client.go:171] LocalClient.Create took 11.83230275s
	I1005 21:10:29.927097 1151829 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-027764" took 11.832360802s
	I1005 21:10:29.927105 1151829 start.go:300] post-start starting for "ingress-addon-legacy-027764" (driver="docker")
	I1005 21:10:29.927114 1151829 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:10:29.927171 1151829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:10:29.927216 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:29.945022 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:10:30.048359 1151829 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:10:30.054425 1151829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:10:30.054468 1151829 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:10:30.054480 1151829 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:10:30.054492 1151829 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:10:30.054505 1151829 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1112519/.minikube/addons for local assets ...
	I1005 21:10:30.054606 1151829 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1112519/.minikube/files for local assets ...
	I1005 21:10:30.054708 1151829 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/11179032.pem -> 11179032.pem in /etc/ssl/certs
	I1005 21:10:30.054720 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/11179032.pem -> /etc/ssl/certs/11179032.pem
	I1005 21:10:30.054841 1151829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:10:30.068310 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/11179032.pem --> /etc/ssl/certs/11179032.pem (1708 bytes)
	I1005 21:10:30.103705 1151829 start.go:303] post-start completed in 176.583123ms
	I1005 21:10:30.104164 1151829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-027764
	I1005 21:10:30.124888 1151829 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/config.json ...
	I1005 21:10:30.125207 1151829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:10:30.125254 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:30.144346 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:10:30.237677 1151829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:10:30.244587 1151829 start.go:128] duration metric: createHost completed in 12.152664078s
	I1005 21:10:30.244610 1151829 start.go:83] releasing machines lock for "ingress-addon-legacy-027764", held for 12.152785538s
	I1005 21:10:30.244702 1151829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-027764
	I1005 21:10:30.262477 1151829 ssh_runner.go:195] Run: cat /version.json
	I1005 21:10:30.262536 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:30.262788 1151829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:10:30.262871 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:10:30.286747 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:10:30.296134 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:10:30.513507 1151829 ssh_runner.go:195] Run: systemctl --version
	I1005 21:10:30.519212 1151829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:10:30.524996 1151829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 21:10:30.555798 1151829 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:10:30.555920 1151829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:10:30.590380 1151829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 21:10:30.590404 1151829 start.go:469] detecting cgroup driver to use...
	I1005 21:10:30.590464 1151829 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:10:30.590538 1151829 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1005 21:10:30.605332 1151829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 21:10:30.619032 1151829 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:10:30.619166 1151829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:10:30.635598 1151829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:10:30.652550 1151829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:10:30.748699 1151829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:10:30.849620 1151829 docker.go:213] disabling docker service ...
	I1005 21:10:30.849684 1151829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:10:30.870788 1151829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:10:30.885311 1151829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:10:30.989455 1151829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:10:31.083794 1151829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:10:31.098063 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:10:31.119914 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1005 21:10:31.133286 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 21:10:31.145662 1151829 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 21:10:31.145783 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 21:10:31.158052 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 21:10:31.170292 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 21:10:31.182056 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 21:10:31.194235 1151829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:10:31.205385 1151829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 21:10:31.217879 1151829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:10:31.228242 1151829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:10:31.238442 1151829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:10:31.327182 1151829 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 21:10:31.463733 1151829 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I1005 21:10:31.463809 1151829 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1005 21:10:31.468772 1151829 start.go:537] Will wait 60s for crictl version
	I1005 21:10:31.468836 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:31.473189 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:10:31.521928 1151829 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1005 21:10:31.521997 1151829 ssh_runner.go:195] Run: containerd --version
	I1005 21:10:31.549308 1151829 ssh_runner.go:195] Run: containerd --version
	I1005 21:10:31.579316 1151829 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.24 ...
	I1005 21:10:31.581176 1151829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-027764 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:10:31.598387 1151829 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 21:10:31.602928 1151829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:10:31.616188 1151829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1005 21:10:31.616262 1151829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:10:31.658832 1151829 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1005 21:10:31.658903 1151829 ssh_runner.go:195] Run: which lz4
	I1005 21:10:31.663409 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1005 21:10:31.663509 1151829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1005 21:10:31.667606 1151829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1005 21:10:31.667637 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1005 21:10:33.977497 1151829 containerd.go:547] Took 2.314027 seconds to copy over tarball
	I1005 21:10:33.977615 1151829 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1005 21:10:36.725105 1151829 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.747441308s)
	I1005 21:10:36.725133 1151829 containerd.go:554] Took 2.747563 seconds to extract the tarball
	I1005 21:10:36.725143 1151829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1005 21:10:36.812733 1151829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:10:36.917127 1151829 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 21:10:37.053289 1151829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:10:37.098445 1151829 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1005 21:10:37.098471 1151829 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1005 21:10:37.098557 1151829 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:10:37.098769 1151829 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:10:37.098844 1151829 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:10:37.098917 1151829 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:10:37.098997 1151829 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:10:37.099097 1151829 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1005 21:10:37.099180 1151829 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1005 21:10:37.099250 1151829 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1005 21:10:37.100552 1151829 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:10:37.100959 1151829 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1005 21:10:37.101108 1151829 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:10:37.101227 1151829 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:10:37.101346 1151829 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1005 21:10:37.101476 1151829 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:10:37.101742 1151829 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:10:37.101901 1151829 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	W1005 21:10:37.502450 1151829 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.502587 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W1005 21:10:37.535783 1151829 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.535948 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W1005 21:10:37.544888 1151829 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.545136 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W1005 21:10:37.546743 1151829 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.546882 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W1005 21:10:37.579175 1151829 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.579367 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	I1005 21:10:37.583515 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W1005 21:10:37.632040 1151829 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.632205 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W1005 21:10:37.731589 1151829 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1005 21:10:37.731739 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1005 21:10:37.861060 1151829 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1005 21:10:37.861116 1151829 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:10:37.861202 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.041459 1151829 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1005 21:10:38.041509 1151829 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:10:38.041659 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.266641 1151829 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1005 21:10:38.266788 1151829 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:10:38.266870 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.266725 1151829 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1005 21:10:38.266978 1151829 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1005 21:10:38.267026 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.416829 1151829 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1005 21:10:38.416880 1151829 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1005 21:10:38.416935 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.416994 1151829 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1005 21:10:38.417015 1151829 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1005 21:10:38.417078 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.423433 1151829 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1005 21:10:38.423484 1151829 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:10:38.423544 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.437174 1151829 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1005 21:10:38.437349 1151829 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:10:38.437402 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1005 21:10:38.437443 1151829 ssh_runner.go:195] Run: which crictl
	I1005 21:10:38.437488 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:10:38.437297 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:10:38.437581 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1005 21:10:38.437362 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:10:38.437554 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1005 21:10:38.437693 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:10:38.626542 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1005 21:10:38.626580 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1005 21:10:38.626651 1151829 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:10:38.626692 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1005 21:10:38.626805 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1005 21:10:38.626758 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1005 21:10:38.631574 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1005 21:10:38.631670 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1005 21:10:38.685580 1151829 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1005 21:10:38.685656 1151829 cache_images.go:92] LoadImages completed in 1.587170318s
	W1005 21:10:38.685751 1151829 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I1005 21:10:38.685804 1151829 ssh_runner.go:195] Run: sudo crictl info
	I1005 21:10:38.734021 1151829 cni.go:84] Creating CNI manager for ""
	I1005 21:10:38.734046 1151829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:10:38.734094 1151829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:10:38.734119 1151829 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-027764 NodeName:ingress-addon-legacy-027764 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1005 21:10:38.734267 1151829 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-027764"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 21:10:38.734358 1151829 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-027764 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-027764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 21:10:38.734440 1151829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1005 21:10:38.745162 1151829 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:10:38.745275 1151829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:10:38.755669 1151829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1005 21:10:38.776659 1151829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1005 21:10:38.797840 1151829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I1005 21:10:38.819012 1151829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:10:38.823328 1151829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:10:38.836350 1151829 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764 for IP: 192.168.49.2
	I1005 21:10:38.836429 1151829 certs.go:190] acquiring lock for shared ca certs: {Name:mkf0b25ffbb252c0d3d05e76f2fd0942f3acc421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:38.836592 1151829 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key
	I1005 21:10:38.836659 1151829 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key
	I1005 21:10:38.836724 1151829 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.key
	I1005 21:10:38.836738 1151829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt with IP's: []
	I1005 21:10:39.098935 1151829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt ...
	I1005 21:10:39.098969 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: {Name:mk1e5215fb2329594d1b876bc475fa4ec0bf472e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:39.099186 1151829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.key ...
	I1005 21:10:39.099204 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.key: {Name:mka33d0a27a262988fb6e68725234e24b99a05cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:39.099295 1151829 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key.dd3b5fb2
	I1005 21:10:39.099313 1151829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 21:10:39.302800 1151829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt.dd3b5fb2 ...
	I1005 21:10:39.302833 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt.dd3b5fb2: {Name:mkb6152887d80e3f33ae0e330521bec159dfcc0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:39.303022 1151829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key.dd3b5fb2 ...
	I1005 21:10:39.303040 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key.dd3b5fb2: {Name:mkab82c6d52d4ea49e2832fee5905512e0d6952b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:39.303160 1151829 certs.go:337] copying /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt
	I1005 21:10:39.303248 1151829 certs.go:341] copying /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key
	I1005 21:10:39.303304 1151829 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.key
	I1005 21:10:39.303321 1151829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.crt with IP's: []
	I1005 21:10:39.878387 1151829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.crt ...
	I1005 21:10:39.878419 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.crt: {Name:mka5e42d80c86cb4de0ded10d29b6f8a629aa597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:39.878616 1151829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.key ...
	I1005 21:10:39.878630 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.key: {Name:mk9ebdaad6065226924b2d44920631e1221ce04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:10:39.878722 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1005 21:10:39.878745 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1005 21:10:39.878762 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1005 21:10:39.878785 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1005 21:10:39.878800 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 21:10:39.878813 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 21:10:39.878828 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 21:10:39.878841 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 21:10:39.878896 1151829 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/1117903.pem (1338 bytes)
	W1005 21:10:39.878940 1151829 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/1117903_empty.pem, impossibly tiny 0 bytes
	I1005 21:10:39.878954 1151829 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:10:39.878988 1151829 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:10:39.879022 1151829 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:10:39.879068 1151829 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/certs/key.pem (1675 bytes)
	I1005 21:10:39.879117 1151829 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/11179032.pem (1708 bytes)
	I1005 21:10:39.879153 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:10:39.879183 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/1117903.pem -> /usr/share/ca-certificates/1117903.pem
	I1005 21:10:39.879197 1151829 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/11179032.pem -> /usr/share/ca-certificates/11179032.pem
	I1005 21:10:39.879759 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:10:39.909540 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 21:10:39.939412 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:10:39.968353 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 21:10:39.996404 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:10:40.031368 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 21:10:40.064544 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:10:40.095917 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:10:40.127821 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:10:40.158786 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/certs/1117903.pem --> /usr/share/ca-certificates/1117903.pem (1338 bytes)
	I1005 21:10:40.189605 1151829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/ssl/certs/11179032.pem --> /usr/share/ca-certificates/11179032.pem (1708 bytes)
	I1005 21:10:40.218438 1151829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:10:40.240976 1151829 ssh_runner.go:195] Run: openssl version
	I1005 21:10:40.248206 1151829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:10:40.259971 1151829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:10:40.264663 1151829 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:10:40.264780 1151829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:10:40.273688 1151829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 21:10:40.285788 1151829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117903.pem && ln -fs /usr/share/ca-certificates/1117903.pem /etc/ssl/certs/1117903.pem"
	I1005 21:10:40.297598 1151829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117903.pem
	I1005 21:10:40.302565 1151829 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 21:06 /usr/share/ca-certificates/1117903.pem
	I1005 21:10:40.302634 1151829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117903.pem
	I1005 21:10:40.311244 1151829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117903.pem /etc/ssl/certs/51391683.0"
	I1005 21:10:40.323830 1151829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11179032.pem && ln -fs /usr/share/ca-certificates/11179032.pem /etc/ssl/certs/11179032.pem"
	I1005 21:10:40.335387 1151829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11179032.pem
	I1005 21:10:40.340407 1151829 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 21:06 /usr/share/ca-certificates/11179032.pem
	I1005 21:10:40.340511 1151829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11179032.pem
	I1005 21:10:40.349649 1151829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11179032.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 21:10:40.361948 1151829 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:10:40.366617 1151829 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:10:40.366718 1151829 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-027764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-027764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:10:40.366793 1151829 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1005 21:10:40.366860 1151829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:10:40.409260 1151829 cri.go:89] found id: ""
	I1005 21:10:40.409334 1151829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:10:40.420140 1151829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:10:40.430760 1151829 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 21:10:40.430870 1151829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:10:40.441701 1151829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:10:40.441766 1151829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 21:10:40.501820 1151829 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1005 21:10:40.502029 1151829 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 21:10:40.561469 1151829 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:10:40.561604 1151829 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:10:40.561669 1151829 kubeadm.go:322] OS: Linux
	I1005 21:10:40.561748 1151829 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 21:10:40.561827 1151829 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 21:10:40.561905 1151829 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 21:10:40.561989 1151829 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 21:10:40.562074 1151829 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 21:10:40.562162 1151829 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 21:10:40.653620 1151829 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:10:40.653774 1151829 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:10:40.653897 1151829 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 21:10:40.904823 1151829 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:10:40.906532 1151829 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:10:40.906789 1151829 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 21:10:41.012715 1151829 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:10:41.016495 1151829 out.go:204]   - Generating certificates and keys ...
	I1005 21:10:41.016649 1151829 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 21:10:41.016804 1151829 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 21:10:41.373101 1151829 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:10:42.106282 1151829 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:10:42.850400 1151829 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 21:10:43.193865 1151829 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 21:10:43.762982 1151829 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 21:10:43.763430 1151829 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-027764 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:10:45.168407 1151829 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 21:10:45.168871 1151829 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-027764 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:10:45.466908 1151829 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:10:46.327935 1151829 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:10:46.658012 1151829 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 21:10:46.658332 1151829 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:10:46.872539 1151829 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:10:47.462889 1151829 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:10:48.097569 1151829 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:10:48.986420 1151829 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:10:48.987398 1151829 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:10:48.989648 1151829 out.go:204]   - Booting up control plane ...
	I1005 21:10:48.989748 1151829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:10:49.002552 1151829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:10:49.002652 1151829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:10:49.002764 1151829 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:10:49.002950 1151829 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:11:02.005389 1151829 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.003408 seconds
	I1005 21:11:02.005512 1151829 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:11:02.023307 1151829 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:11:02.550527 1151829 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:11:02.550667 1151829 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-027764 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1005 21:11:03.058741 1151829 kubeadm.go:322] [bootstrap-token] Using token: lr4xfi.7mxii7sijqzhmem9
	I1005 21:11:03.060878 1151829 out.go:204]   - Configuring RBAC rules ...
	I1005 21:11:03.061008 1151829 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:11:03.065995 1151829 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:11:03.074389 1151829 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:11:03.081284 1151829 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:11:03.084453 1151829 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:11:03.087679 1151829 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:11:03.098867 1151829 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:11:03.376103 1151829 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 21:11:03.491585 1151829 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 21:11:03.492693 1151829 kubeadm.go:322] 
	I1005 21:11:03.492760 1151829 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 21:11:03.492766 1151829 kubeadm.go:322] 
	I1005 21:11:03.492839 1151829 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 21:11:03.492844 1151829 kubeadm.go:322] 
	I1005 21:11:03.492869 1151829 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 21:11:03.492925 1151829 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:11:03.492974 1151829 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:11:03.492979 1151829 kubeadm.go:322] 
	I1005 21:11:03.493028 1151829 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 21:11:03.493100 1151829 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:11:03.493165 1151829 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:11:03.493170 1151829 kubeadm.go:322] 
	I1005 21:11:03.493249 1151829 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:11:03.493322 1151829 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 21:11:03.493337 1151829 kubeadm.go:322] 
	I1005 21:11:03.493417 1151829 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lr4xfi.7mxii7sijqzhmem9 \
	I1005 21:11:03.493517 1151829 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:571092fde05632971def08ad2a457b2fd089790ef449e849065ad5827b1ed47e \
	I1005 21:11:03.493540 1151829 kubeadm.go:322]     --control-plane 
	I1005 21:11:03.493545 1151829 kubeadm.go:322] 
	I1005 21:11:03.493625 1151829 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:11:03.493630 1151829 kubeadm.go:322] 
	I1005 21:11:03.493741 1151829 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lr4xfi.7mxii7sijqzhmem9 \
	I1005 21:11:03.494096 1151829 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:571092fde05632971def08ad2a457b2fd089790ef449e849065ad5827b1ed47e 
	I1005 21:11:03.497239 1151829 kubeadm.go:322] W1005 21:10:40.499561    1104 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1005 21:11:03.497451 1151829 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:11:03.497556 1151829 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:11:03.497679 1151829 kubeadm.go:322] W1005 21:10:48.996814    1104 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1005 21:11:03.497806 1151829 kubeadm.go:322] W1005 21:10:48.998051    1104 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1005 21:11:03.497823 1151829 cni.go:84] Creating CNI manager for ""
	I1005 21:11:03.497830 1151829 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:11:03.501078 1151829 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:11:03.503142 1151829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:11:03.508186 1151829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1005 21:11:03.508244 1151829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:11:03.531633 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:11:03.986472 1151829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:11:03.986611 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:03.986690 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=ingress-addon-legacy-027764 minikube.k8s.io/updated_at=2023_10_05T21_11_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:04.160103 1151829 ops.go:34] apiserver oom_adj: -16
	I1005 21:11:04.160196 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:04.258843 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:04.864288 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:05.364300 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:05.863644 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:06.363946 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:06.863637 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:07.363677 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:07.863915 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:08.364287 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:08.863644 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:09.363868 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:09.863723 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:10.363788 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:10.864533 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:11.364183 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:11.864536 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:12.364105 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:12.864558 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:13.364278 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:13.863979 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:14.363856 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:14.863680 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:15.363677 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:15.863640 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:16.364186 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:16.864644 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:17.364098 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:17.864214 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:18.364524 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:18.864333 1151829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:11:18.962368 1151829 kubeadm.go:1081] duration metric: took 14.975808864s to wait for elevateKubeSystemPrivileges.
	I1005 21:11:18.962399 1151829 kubeadm.go:406] StartCluster complete in 38.595684966s
	I1005 21:11:18.962424 1151829 settings.go:142] acquiring lock: {Name:mk8ac06a875c8ddea9ee6a3c248c409c1d3f301d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:11:18.962478 1151829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:11:18.963286 1151829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1112519/kubeconfig: {Name:mk4151b883e566a83b3cbe0bf9e01957efa61f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:11:18.963503 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:11:18.963762 1151829 config.go:182] Loaded profile config "ingress-addon-legacy-027764": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1005 21:11:18.963901 1151829 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 21:11:18.963974 1151829 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-027764"
	I1005 21:11:18.963989 1151829 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-027764"
	I1005 21:11:18.964023 1151829 host.go:66] Checking if "ingress-addon-legacy-027764" exists ...
	I1005 21:11:18.964007 1151829 kapi.go:59] client config for ingress-addon-legacy-027764: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:11:18.964485 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Status}}
	I1005 21:11:18.964953 1151829 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-027764"
	I1005 21:11:18.964978 1151829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-027764"
	I1005 21:11:18.965268 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Status}}
	I1005 21:11:18.965523 1151829 cert_rotation.go:137] Starting client certificate rotation controller
	I1005 21:11:19.021575 1151829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:11:19.023338 1151829 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:11:19.023358 1151829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 21:11:19.023430 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:11:19.038482 1151829 kapi.go:59] client config for ingress-addon-legacy-027764: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:11:19.038780 1151829 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-027764"
	I1005 21:11:19.038809 1151829 host.go:66] Checking if "ingress-addon-legacy-027764" exists ...
	I1005 21:11:19.039362 1151829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-027764 --format={{.State.Status}}
	I1005 21:11:19.068037 1151829 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-027764" context rescaled to 1 replicas
	I1005 21:11:19.068073 1151829 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 21:11:19.070031 1151829 out.go:177] * Verifying Kubernetes components...
	I1005 21:11:19.072632 1151829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:11:19.084034 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:11:19.108773 1151829 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 21:11:19.108802 1151829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 21:11:19.108862 1151829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-027764
	I1005 21:11:19.142409 1151829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/ingress-addon-legacy-027764/id_rsa Username:docker}
	I1005 21:11:19.311590 1151829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 21:11:19.313267 1151829 kapi.go:59] client config for ingress-addon-legacy-027764: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:11:19.313871 1151829 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-027764" to be "Ready" ...
	I1005 21:11:19.317266 1151829 node_ready.go:49] node "ingress-addon-legacy-027764" has status "Ready":"True"
	I1005 21:11:19.317293 1151829 node_ready.go:38] duration metric: took 3.393828ms waiting for node "ingress-addon-legacy-027764" to be "Ready" ...
	I1005 21:11:19.317305 1151829 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:11:19.326473 1151829 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7xtns" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:19.336257 1151829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 21:11:19.461956 1151829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:11:19.872418 1151829 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1005 21:11:20.102550 1151829 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1005 21:11:20.104251 1151829 addons.go:502] enable addons completed in 1.140338784s: enabled=[default-storageclass storage-provisioner]
	I1005 21:11:21.341551 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:23.838661 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:25.840546 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:28.339229 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:30.838457 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:32.839466 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:35.338243 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:37.338713 1151829 pod_ready.go:102] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"False"
	I1005 21:11:38.338940 1151829 pod_ready.go:92] pod "coredns-66bff467f8-7xtns" in "kube-system" namespace has status "Ready":"True"
	I1005 21:11:38.338968 1151829 pod_ready.go:81] duration metric: took 19.012463315s waiting for pod "coredns-66bff467f8-7xtns" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.338980 1151829 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.344071 1151829 pod_ready.go:92] pod "etcd-ingress-addon-legacy-027764" in "kube-system" namespace has status "Ready":"True"
	I1005 21:11:38.344101 1151829 pod_ready.go:81] duration metric: took 5.110685ms waiting for pod "etcd-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.344115 1151829 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.349343 1151829 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-027764" in "kube-system" namespace has status "Ready":"True"
	I1005 21:11:38.349372 1151829 pod_ready.go:81] duration metric: took 5.24565ms waiting for pod "kube-apiserver-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.349385 1151829 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.354773 1151829 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-027764" in "kube-system" namespace has status "Ready":"True"
	I1005 21:11:38.354800 1151829 pod_ready.go:81] duration metric: took 5.40761ms waiting for pod "kube-controller-manager-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.354812 1151829 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-67v2w" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.359851 1151829 pod_ready.go:92] pod "kube-proxy-67v2w" in "kube-system" namespace has status "Ready":"True"
	I1005 21:11:38.359878 1151829 pod_ready.go:81] duration metric: took 5.0574ms waiting for pod "kube-proxy-67v2w" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.359890 1151829 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.533238 1151829 request.go:629] Waited for 173.220346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-027764
	I1005 21:11:38.733266 1151829 request.go:629] Waited for 197.184385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-027764
	I1005 21:11:38.736061 1151829 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-027764" in "kube-system" namespace has status "Ready":"True"
	I1005 21:11:38.736089 1151829 pod_ready.go:81] duration metric: took 376.191005ms waiting for pod "kube-scheduler-ingress-addon-legacy-027764" in "kube-system" namespace to be "Ready" ...
	I1005 21:11:38.736100 1151829 pod_ready.go:38] duration metric: took 19.418784267s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:11:38.736116 1151829 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:11:38.736176 1151829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:11:38.750245 1151829 api_server.go:72] duration metric: took 19.682126914s to wait for apiserver process to appear ...
	I1005 21:11:38.750275 1151829 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:11:38.750293 1151829 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 21:11:38.759205 1151829 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 21:11:38.760072 1151829 api_server.go:141] control plane version: v1.18.20
	I1005 21:11:38.760096 1151829 api_server.go:131] duration metric: took 9.813207ms to wait for apiserver health ...
	I1005 21:11:38.760108 1151829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:11:38.933529 1151829 request.go:629] Waited for 173.356459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:11:38.939678 1151829 system_pods.go:59] 8 kube-system pods found
	I1005 21:11:38.939717 1151829 system_pods.go:61] "coredns-66bff467f8-7xtns" [0657d65f-a570-42a8-bece-aa776facee82] Running
	I1005 21:11:38.939725 1151829 system_pods.go:61] "etcd-ingress-addon-legacy-027764" [3a556f10-92ec-42f7-bedc-ecd5c031718c] Running
	I1005 21:11:38.939730 1151829 system_pods.go:61] "kindnet-lsmdd" [7d8456d3-80d4-4278-849d-fa2a18ccc4ed] Running
	I1005 21:11:38.939738 1151829 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-027764" [26aa57d4-e25c-4e05-b437-2fd78537e6e7] Running
	I1005 21:11:38.939744 1151829 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-027764" [7fc246af-78b9-4185-9dff-6edaf9172559] Running
	I1005 21:11:38.939750 1151829 system_pods.go:61] "kube-proxy-67v2w" [bd7b2f04-a4b9-4c01-aad0-44881751cd1c] Running
	I1005 21:11:38.939762 1151829 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-027764" [6b1ffaa7-765d-45cf-a1de-016aeb8df89d] Running
	I1005 21:11:38.939772 1151829 system_pods.go:61] "storage-provisioner" [27ed5768-e50f-4d38-922b-943df1238e75] Running
	I1005 21:11:38.939777 1151829 system_pods.go:74] duration metric: took 179.663839ms to wait for pod list to return data ...
	I1005 21:11:38.939790 1151829 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:11:39.133796 1151829 request.go:629] Waited for 193.932686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 21:11:39.136411 1151829 default_sa.go:45] found service account: "default"
	I1005 21:11:39.136442 1151829 default_sa.go:55] duration metric: took 196.644745ms for default service account to be created ...
	I1005 21:11:39.136452 1151829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:11:39.333654 1151829 request.go:629] Waited for 197.13888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:11:39.339432 1151829 system_pods.go:86] 8 kube-system pods found
	I1005 21:11:39.339471 1151829 system_pods.go:89] "coredns-66bff467f8-7xtns" [0657d65f-a570-42a8-bece-aa776facee82] Running
	I1005 21:11:39.339479 1151829 system_pods.go:89] "etcd-ingress-addon-legacy-027764" [3a556f10-92ec-42f7-bedc-ecd5c031718c] Running
	I1005 21:11:39.339486 1151829 system_pods.go:89] "kindnet-lsmdd" [7d8456d3-80d4-4278-849d-fa2a18ccc4ed] Running
	I1005 21:11:39.339491 1151829 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-027764" [26aa57d4-e25c-4e05-b437-2fd78537e6e7] Running
	I1005 21:11:39.339497 1151829 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-027764" [7fc246af-78b9-4185-9dff-6edaf9172559] Running
	I1005 21:11:39.339501 1151829 system_pods.go:89] "kube-proxy-67v2w" [bd7b2f04-a4b9-4c01-aad0-44881751cd1c] Running
	I1005 21:11:39.339506 1151829 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-027764" [6b1ffaa7-765d-45cf-a1de-016aeb8df89d] Running
	I1005 21:11:39.339511 1151829 system_pods.go:89] "storage-provisioner" [27ed5768-e50f-4d38-922b-943df1238e75] Running
	I1005 21:11:39.339523 1151829 system_pods.go:126] duration metric: took 203.066094ms to wait for k8s-apps to be running ...
	I1005 21:11:39.339540 1151829 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:11:39.339599 1151829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:11:39.353467 1151829 system_svc.go:56] duration metric: took 13.917201ms WaitForService to wait for kubelet.
	I1005 21:11:39.353533 1151829 kubeadm.go:581] duration metric: took 20.285428527s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:11:39.353561 1151829 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:11:39.533936 1151829 request.go:629] Waited for 180.302622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1005 21:11:39.536772 1151829 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:11:39.536807 1151829 node_conditions.go:123] node cpu capacity is 2
	I1005 21:11:39.536819 1151829 node_conditions.go:105] duration metric: took 183.252874ms to run NodePressure ...
	I1005 21:11:39.536830 1151829 start.go:228] waiting for startup goroutines ...
	I1005 21:11:39.536838 1151829 start.go:233] waiting for cluster config update ...
	I1005 21:11:39.536869 1151829 start.go:242] writing updated cluster config ...
	I1005 21:11:39.537169 1151829 ssh_runner.go:195] Run: rm -f paused
	I1005 21:11:39.596243 1151829 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1005 21:11:39.598879 1151829 out.go:177] 
	W1005 21:11:39.601049 1151829 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1005 21:11:39.602842 1151829 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1005 21:11:39.604867 1151829 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-027764" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f870dbaab90aa       97e050c3e21e9       14 seconds ago       Exited              hello-world-app           2                   ab9c99f0dacd6       hello-world-app-5f5d8b66bb-lndfd
	6813503b08677       df8fd1ca35d66       42 seconds ago       Running             nginx                     0                   b86a0426ed54d       nginx
	50689f4c6e2f0       d7f0cba3aa5bf       56 seconds ago       Exited              controller                0                   ad845db8dc6c6       ingress-nginx-controller-7fcf777cb7-5sdmv
	05d0f9b5d8051       a883f7fc35610       About a minute ago   Exited              patch                     0                   16cfb80812734       ingress-nginx-admission-patch-qvm79
	f85bc876a7512       a883f7fc35610       About a minute ago   Exited              create                    0                   5499decca8cdb       ingress-nginx-admission-create-bqcln
	1d6b72411b283       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   e89bde0be53a2       coredns-66bff467f8-7xtns
	830dacee2b8d1       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   35a844eb57887       storage-provisioner
	7a8946001cc18       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   aff26b86c0dcc       kindnet-lsmdd
	70cb67f014556       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   15f4c241e93cb       kube-proxy-67v2w
	cc0a09fc99f4c       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   6d69b712d318b       etcd-ingress-addon-legacy-027764
	a25d9a856ab6e       095f37015706d       About a minute ago   Running             kube-scheduler            0                   e6abe81efb650       kube-scheduler-ingress-addon-legacy-027764
	c16fecb4a2582       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   8f4fe3ed9e433       kube-apiserver-ingress-addon-legacy-027764
	57e2dc3a2eb21       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   cf95a548ee0e7       kube-controller-manager-ingress-addon-legacy-027764
	
	* 
	* ==> containerd <==
	* Oct 05 21:12:31 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:31.111638720Z" level=info msg="RemoveContainer for \"b3f865269b4fd74d4458cedef765cdda43fa28fa190d4989fc10378407124814\" returns successfully"
	Oct 05 21:12:37 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:37.738169072Z" level=info msg="StopContainer for \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" with timeout 2 (s)"
	Oct 05 21:12:37 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:37.738854779Z" level=info msg="StopContainer for \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" with timeout 2 (s)"
	Oct 05 21:12:37 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:37.739522165Z" level=info msg="Stop container \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" with signal terminated"
	Oct 05 21:12:37 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:37.739617770Z" level=info msg="Skipping the sending of signal terminated to container \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" because a prior stop with timeout>0 request already sent the signal"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.740153356Z" level=info msg="Kill container \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\""
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.765330921Z" level=info msg="Kill container \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\""
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.834458873Z" level=info msg="shim disconnected" id=50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.834514602Z" level=warning msg="cleaning up after shim disconnected" id=50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725 namespace=k8s.io
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.834527426Z" level=info msg="cleaning up dead shim"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.845392156Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:12:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4601 runtime=io.containerd.runc.v2\n"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.848451840Z" level=info msg="StopContainer for \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" returns successfully"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.848465379Z" level=info msg="StopContainer for \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" returns successfully"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.849237715Z" level=info msg="StopPodSandbox for \"ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf\""
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.849306597Z" level=info msg="Container to stop \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.849600273Z" level=info msg="StopPodSandbox for \"ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf\""
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.849731185Z" level=info msg="Container to stop \"50689f4c6e2f0e10dbd2e71357448b5922a29d173b3f266f2267d3b6f652c725\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.886453903Z" level=info msg="shim disconnected" id=ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.887603402Z" level=warning msg="cleaning up after shim disconnected" id=ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf namespace=k8s.io
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.887640424Z" level=info msg="cleaning up dead shim"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.898257467Z" level=warning msg="cleanup warnings time=\"2023-10-05T21:12:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4639 runtime=io.containerd.runc.v2\n"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.951275482Z" level=info msg="TearDown network for sandbox \"ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf\" successfully"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.951445163Z" level=info msg="StopPodSandbox for \"ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf\" returns successfully"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.959846718Z" level=info msg="TearDown network for sandbox \"ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf\" successfully"
	Oct 05 21:12:39 ingress-addon-legacy-027764 containerd[825]: time="2023-10-05T21:12:39.959896121Z" level=info msg="StopPodSandbox for \"ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf\" returns successfully"
	
	* 
	* ==> coredns [1d6b72411b283dac05c6e7ccf6c00ffc8a99984081d217b347729703ef446ac9] <==
	* [INFO] 10.244.0.5:43348 - 46693 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033379s
	[INFO] 10.244.0.5:35809 - 34256 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000261s
	[INFO] 10.244.0.5:43348 - 16417 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051479s
	[INFO] 10.244.0.5:35809 - 5902 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028521s
	[INFO] 10.244.0.5:43348 - 2225 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037784s
	[INFO] 10.244.0.5:35809 - 12576 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022441s
	[INFO] 10.244.0.5:43348 - 54028 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034634s
	[INFO] 10.244.0.5:35809 - 59633 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024943s
	[INFO] 10.244.0.5:43348 - 27728 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000973631s
	[INFO] 10.244.0.5:35809 - 56691 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033263s
	[INFO] 10.244.0.5:34462 - 51917 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000026896s
	[INFO] 10.244.0.5:43348 - 22989 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000892861s
	[INFO] 10.244.0.5:35809 - 2189 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050428s
	[INFO] 10.244.0.5:43348 - 51526 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030687s
	[INFO] 10.244.0.5:35809 - 61490 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000769735s
	[INFO] 10.244.0.5:35809 - 53496 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000815454s
	[INFO] 10.244.0.5:35809 - 52840 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039753s
	[INFO] 10.244.0.5:34462 - 41061 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050379s
	[INFO] 10.244.0.5:34462 - 247 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036053s
	[INFO] 10.244.0.5:34462 - 49010 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003113s
	[INFO] 10.244.0.5:34462 - 924 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003118s
	[INFO] 10.244.0.5:34462 - 12904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031007s
	[INFO] 10.244.0.5:34462 - 10073 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000794671s
	[INFO] 10.244.0.5:34462 - 45345 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000811836s
	[INFO] 10.244.0.5:34462 - 5740 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035733s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-027764
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-027764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=ingress-addon-legacy-027764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_11_03_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:11:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-027764
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:12:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:12:36 +0000   Thu, 05 Oct 2023 21:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:12:36 +0000   Thu, 05 Oct 2023 21:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:12:36 +0000   Thu, 05 Oct 2023 21:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:12:36 +0000   Thu, 05 Oct 2023 21:11:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-027764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d4c47052143492789c1356bf4ebb483
	  System UUID:                cbb97361-a566-435d-9cd7-cce4a2877d95
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-lndfd                       0 (0%)        0 (0%)     0 (0%)           0 (0%)         35s
	  default                     nginx                                                  0 (0%)        0 (0%)     0 (0%)           0 (0%)         45s
	  kube-system                 coredns-66bff467f8-7xtns                               100m (5%)     0 (0%)     70Mi (0%)        170Mi (2%)     86s
	  kube-system                 etcd-ingress-addon-legacy-027764                       0 (0%)        0 (0%)     0 (0%)           0 (0%)         98s
	  kube-system                 kindnet-lsmdd                                          100m (5%)     100m (5%)  50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-ingress-addon-legacy-027764             250m (12%)    0 (0%)     0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-027764    200m (10%)    0 (0%)     0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-67v2w                                       0 (0%)        0 (0%)     0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-ingress-addon-legacy-027764             100m (5%)     0 (0%)     0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)     0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  114s (x5 over 114s)  kubelet     Node ingress-addon-legacy-027764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x5 over 114s)  kubelet     Node ingress-addon-legacy-027764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x4 over 114s)  kubelet     Node ingress-addon-legacy-027764 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-027764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-027764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-027764 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-027764 status is now: NodeReady
	  Normal  Starting                 85s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000694] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001141] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=0000000072531354
	[  +0.001062] FS-Cache: N-key=[8] '18633b0000000000'
	[  +0.009061] FS-Cache: Duplicate cookie detected
	[  +0.000828] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=000000008701c6cb
	[  +0.001055] FS-Cache: O-key=[8] '18633b0000000000'
	[  +0.000725] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=0000000048b7c97c
	[  +0.001100] FS-Cache: N-key=[8] '18633b0000000000'
	[  +2.634972] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000981] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=000000000d4a2f1f
	[  +0.001039] FS-Cache: O-key=[8] '17633b0000000000'
	[  +0.000722] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=0000000072531354
	[  +0.001029] FS-Cache: N-key=[8] '17633b0000000000'
	[  +0.296847] FS-Cache: Duplicate cookie detected
	[  +0.000765] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000954] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000eeb44526
	[  +0.001163] FS-Cache: O-key=[8] '1d633b0000000000'
	[  +0.000704] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000002d60a56e
	[  +0.001048] FS-Cache: N-key=[8] '1d633b0000000000'
	[Oct 5 21:10] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [cc0a09fc99f4c821074a25b36eac358d75c88ac1459ce38893780f0c78ef506c] <==
	* raft2023/10/05 21:10:54 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/05 21:10:54 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/05 21:10:54 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/05 21:10:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-05 21:10:54.911459 W | auth: simple token is not cryptographically signed
	2023-10-05 21:10:54.991654 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/10/05 21:10:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-05 21:10:55.371229 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-05 21:10:55.403103 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-05 21:10:55.476731 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-05 21:10:55.629347 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-05 21:10:55.629399 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/10/05 21:10:56 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/05 21:10:56 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/05 21:10:56 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/05 21:10:56 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/05 21:10:56 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-05 21:10:56.149547 I | etcdserver: published {Name:ingress-addon-legacy-027764 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-05 21:10:56.149916 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-05 21:10:56.150282 I | embed: ready to serve client requests
	2023-10-05 21:10:56.151269 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-05 21:10:56.151458 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-05 21:10:56.151555 I | embed: ready to serve client requests
	2023-10-05 21:10:56.152901 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-05 21:10:56.167562 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  21:12:45 up  6:55,  0 users,  load average: 0.90, 1.51, 2.18
	Linux ingress-addon-legacy-027764 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [7a8946001cc188ea8edc057e8ce5ee46ff96e77e17d6bd10cfab6e26cf68c1c9] <==
	* I1005 21:11:21.529534       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1005 21:11:21.529601       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1005 21:11:21.529722       1 main.go:116] setting mtu 1500 for CNI 
	I1005 21:11:21.529745       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 21:11:21.529755       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 21:11:21.926006       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:11:21.926043       1 main.go:227] handling current node
	I1005 21:11:31.933735       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:11:31.933763       1 main.go:227] handling current node
	I1005 21:11:41.942533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:11:41.942561       1 main.go:227] handling current node
	I1005 21:11:51.952971       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:11:51.953006       1 main.go:227] handling current node
	I1005 21:12:01.963942       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:12:01.963973       1 main.go:227] handling current node
	I1005 21:12:11.968260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:12:11.968289       1 main.go:227] handling current node
	I1005 21:12:21.971612       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:12:21.971645       1 main.go:227] handling current node
	I1005 21:12:31.984353       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:12:31.984383       1 main.go:227] handling current node
	I1005 21:12:41.996806       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:12:41.996836       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c16fecb4a2582ddf700acd289387b04366c0cf6866012c01ed749bafc4aaafe9] <==
	* I1005 21:11:00.361457       1 cache.go:39] Caches are synced for autoregister controller
	I1005 21:11:00.396553       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1005 21:11:00.396763       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 21:11:00.396989       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1005 21:11:00.397099       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 21:11:01.136506       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1005 21:11:01.136548       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1005 21:11:01.179920       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1005 21:11:01.188775       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1005 21:11:01.188801       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1005 21:11:01.628503       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 21:11:01.667867       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1005 21:11:01.768335       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1005 21:11:01.769457       1 controller.go:609] quota admission added evaluator for: endpoints
	I1005 21:11:01.773279       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1005 21:11:02.601483       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1005 21:11:03.362149       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1005 21:11:03.476613       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1005 21:11:06.771153       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 21:11:19.009267       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1005 21:11:19.033705       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1005 21:11:40.475224       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1005 21:12:00.574090       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1005 21:12:36.838836       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x4008a598c8), encoder:(*versioning.codec)(0x4010e4c6e0), buf:(*bytes.Buffer)(0x4010042360)})
	E1005 21:12:37.760038       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [57e2dc3a2eb21712380af7c462e77032f2838093457269246ae79f9596dd7667] <==
	* I1005 21:11:19.109063       1 disruption.go:339] Sending events to api server.
	I1005 21:11:19.109092       1 shared_informer.go:230] Caches are synced for job 
	I1005 21:11:19.109109       1 shared_informer.go:230] Caches are synced for stateful set 
	I1005 21:11:19.109150       1 shared_informer.go:230] Caches are synced for GC 
	I1005 21:11:19.109499       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"835dbe22-c768-48dd-9db3-ada3385efd03", APIVersion:"apps/v1", ResourceVersion:"229", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-lsmdd
	I1005 21:11:19.109687       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"98963fcb-5d5b-41d3-9070-7b9fa225395e", APIVersion:"apps/v1", ResourceVersion:"334", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7xtns
	I1005 21:11:19.137432       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1005 21:11:19.137579       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1005 21:11:19.183114       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6aed6f2c-5386-4356-bc21-030447654c6a", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-67v2w
	I1005 21:11:19.192634       1 shared_informer.go:230] Caches are synced for resource quota 
	E1005 21:11:19.231436       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"835dbe22-c768-48dd-9db3-ada3385efd03", ResourceVersion:"229", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63832137063, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40018bd360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40018bd380)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40018bd3a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018bd3c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018bd3e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018bd400), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018bd420)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018bd460)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40012a94f0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40018ad0a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001bc7e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000b3820)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40018ad0f0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1005 21:11:19.253566       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1005 21:11:19.253601       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1005 21:11:19.257810       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1005 21:11:19.640229       1 request.go:621] Throttling request took 1.038422073s, request: GET:https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	I1005 21:11:20.242638       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1005 21:11:20.246757       1 shared_informer.go:230] Caches are synced for resource quota 
	I1005 21:11:40.465555       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"59b75426-8fc9-4ca2-aa6c-3ebaa7ea5b71", APIVersion:"apps/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1005 21:11:40.484413       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"199b61ad-4c6f-439e-bbe8-0a480db74728", APIVersion:"apps/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-5sdmv
	I1005 21:11:40.499737       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b282cb61-5710-45e7-aa16-f6f71887f35a", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-bqcln
	I1005 21:11:40.547848       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"46d5c215-e497-4564-80f7-8629d72194db", APIVersion:"batch/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-qvm79
	I1005 21:11:42.951989       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b282cb61-5710-45e7-aa16-f6f71887f35a", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1005 21:11:42.981849       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"46d5c215-e497-4564-80f7-8629d72194db", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1005 21:12:10.304533       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9cc0fafe-e2b6-4892-b93d-0543c708278e", APIVersion:"apps/v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1005 21:12:10.319551       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"f693fd9a-541a-4076-b7d3-42f240370081", APIVersion:"apps/v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-lndfd
	
	* 
	* ==> kube-proxy [70cb67f0145569697e11c482f3df54f199c076b2b075f5de2c637c0238220dca] <==
	* W1005 21:11:20.007016       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1005 21:11:20.036312       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1005 21:11:20.036363       1 server_others.go:186] Using iptables Proxier.
	I1005 21:11:20.037694       1 server.go:583] Version: v1.18.20
	I1005 21:11:20.043206       1 config.go:133] Starting endpoints config controller
	I1005 21:11:20.043239       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1005 21:11:20.043303       1 config.go:315] Starting service config controller
	I1005 21:11:20.043313       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1005 21:11:20.143419       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1005 21:11:20.143420       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a25d9a856ab6eb3374f561070f9c4b6c5819b72e574dbedbe06d2f0d8df61cbd] <==
	* I1005 21:11:00.375129       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:11:00.376113       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1005 21:11:00.376863       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1005 21:11:00.382521       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:11:00.382626       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 21:11:00.382697       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 21:11:00.377225       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:11:00.383554       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 21:11:00.383632       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 21:11:00.385066       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 21:11:00.385509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 21:11:00.385761       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:11:00.385833       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 21:11:00.385919       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 21:11:00.386433       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 21:11:01.257545       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 21:11:01.342434       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 21:11:01.361465       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 21:11:01.396591       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:11:01.397665       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 21:11:01.422956       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:11:01.430105       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 21:11:01.437552       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1005 21:11:03.875432       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1005 21:11:19.138434       1 factory.go:503] pod kube-system/coredns-66bff467f8-7xtns is already present in the backoff queue
	
	* 
	* ==> kubelet <==
	* Oct 05 21:12:17 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:17.062624    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c059352630de464ca9088ce64c6790973e192d0bce503ad95777e6f132fffef0
	Oct 05 21:12:17 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:17.063331    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b3f865269b4fd74d4458cedef765cdda43fa28fa190d4989fc10378407124814
	Oct 05 21:12:17 ingress-addon-legacy-027764 kubelet[1655]: E1005 21:12:17.063659    1655 pod_workers.go:191] Error syncing pod fe5b47e2-8aa9-4d7c-99bc-936eee420ef5 ("hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"
	Oct 05 21:12:18 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:18.067070    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b3f865269b4fd74d4458cedef765cdda43fa28fa190d4989fc10378407124814
	Oct 05 21:12:18 ingress-addon-legacy-027764 kubelet[1655]: E1005 21:12:18.067396    1655 pod_workers.go:191] Error syncing pod fe5b47e2-8aa9-4d7c-99bc-936eee420ef5 ("hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"
	Oct 05 21:12:26 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:26.310205    1655 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-kvjtw" (UniqueName: "kubernetes.io/secret/c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1-minikube-ingress-dns-token-kvjtw") pod "c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1" (UID: "c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1")
	Oct 05 21:12:26 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:26.316814    1655 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1-minikube-ingress-dns-token-kvjtw" (OuterVolumeSpecName: "minikube-ingress-dns-token-kvjtw") pod "c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1" (UID: "c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1"). InnerVolumeSpecName "minikube-ingress-dns-token-kvjtw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:12:26 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:26.410564    1655 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-kvjtw" (UniqueName: "kubernetes.io/secret/c11370a9-8b7e-49bd-bfbf-38b3e1dee0b1-minikube-ingress-dns-token-kvjtw") on node "ingress-addon-legacy-027764" DevicePath ""
	Oct 05 21:12:27 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:27.086098    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b6901ccfb2bbc8527c5d9c0413c9cefbf92229a91dec13a9f36c5fc5379deb81
	Oct 05 21:12:30 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:30.822023    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b3f865269b4fd74d4458cedef765cdda43fa28fa190d4989fc10378407124814
	Oct 05 21:12:31 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:31.097956    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b3f865269b4fd74d4458cedef765cdda43fa28fa190d4989fc10378407124814
	Oct 05 21:12:31 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:31.098353    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f870dbaab90aa31985f016ad59b405acc9d3d9803ca5bbe3da3ff5906431fc75
	Oct 05 21:12:31 ingress-addon-legacy-027764 kubelet[1655]: E1005 21:12:31.098626    1655 pod_workers.go:191] Error syncing pod fe5b47e2-8aa9-4d7c-99bc-936eee420ef5 ("hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"
	Oct 05 21:12:37 ingress-addon-legacy-027764 kubelet[1655]: E1005 21:12:37.740950    1655 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5sdmv.178b527779f4c370", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5sdmv", UID:"8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-027764"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13fe7916bdb5170, ext:94439200022, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13fe7916bdb5170, ext:94439200022, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5sdmv.178b527779f4c370" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 05 21:12:37 ingress-addon-legacy-027764 kubelet[1655]: E1005 21:12:37.748721    1655 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5sdmv.178b527779f4c370", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5sdmv", UID:"8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-027764"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13fe7916bdb5170, ext:94439200022, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13fe7916bca495d, ext:94438083851, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5sdmv.178b527779f4c370" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 05 21:12:40 ingress-addon-legacy-027764 kubelet[1655]: W1005 21:12:40.118948    1655 pod_container_deletor.go:77] Container "ad845db8dc6c6d0e2d2ecd5cc08fcbf9e2110898c60c43adc985bb74e3f281cf" not found in pod's containers
	Oct 05 21:12:41 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:41.858166    1655 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7jsrk" (UniqueName: "kubernetes.io/secret/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd-ingress-nginx-token-7jsrk") pod "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd" (UID: "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd")
	Oct 05 21:12:41 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:41.858223    1655 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd-webhook-cert") pod "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd" (UID: "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd")
	Oct 05 21:12:41 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:41.864664    1655 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd" (UID: "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:12:41 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:41.866085    1655 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd-ingress-nginx-token-7jsrk" (OuterVolumeSpecName: "ingress-nginx-token-7jsrk") pod "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd" (UID: "8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd"). InnerVolumeSpecName "ingress-nginx-token-7jsrk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:12:41 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:41.958649    1655 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7jsrk" (UniqueName: "kubernetes.io/secret/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd-ingress-nginx-token-7jsrk") on node "ingress-addon-legacy-027764" DevicePath ""
	Oct 05 21:12:41 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:41.959170    1655 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd-webhook-cert") on node "ingress-addon-legacy-027764" DevicePath ""
	Oct 05 21:12:42 ingress-addon-legacy-027764 kubelet[1655]: W1005 21:12:42.834569    1655 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/8b3964a4-30bb-4d3a-8daa-c34fc1c6fcbd/volumes" does not exist
	Oct 05 21:12:45 ingress-addon-legacy-027764 kubelet[1655]: I1005 21:12:45.821811    1655 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f870dbaab90aa31985f016ad59b405acc9d3d9803ca5bbe3da3ff5906431fc75
	Oct 05 21:12:45 ingress-addon-legacy-027764 kubelet[1655]: E1005 21:12:45.822156    1655 pod_workers.go:191] Error syncing pod fe5b47e2-8aa9-4d7c-99bc-936eee420ef5 ("hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lndfd_default(fe5b47e2-8aa9-4d7c-99bc-936eee420ef5)"
	
	* 
	* ==> storage-provisioner [830dacee2b8d1d13fe8549cb66ad7a845e19230451d05cfdb5bcf64b164ce536] <==
	* I1005 21:11:22.809754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 21:11:22.821787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 21:11:22.821957       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 21:11:22.835228       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 21:11:22.835930       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65d496ca-e34d-4d56-8bb5-9926d434d29e", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-027764_1a0468e9-5b2b-4430-9fd5-f38fb0baa829 became leader
	I1005 21:11:22.836214       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-027764_1a0468e9-5b2b-4430-9fd5-f38fb0baa829!
	I1005 21:11:22.936690       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-027764_1a0468e9-5b2b-4430-9fd5-f38fb0baa829!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-027764 -n ingress-addon-legacy-027764
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-027764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.00s)

Test pass (271/307)

Order  Passed test  Duration
3 TestDownloadOnly/v1.16.0/json-events 11.27
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.22
10 TestDownloadOnly/v1.28.2/json-events 10.74
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.59
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 139
27 TestAddons/parallel/Registry 14.88
29 TestAddons/parallel/InspektorGadget 11.01
30 TestAddons/parallel/MetricsServer 5.81
33 TestAddons/parallel/CSI 53.7
34 TestAddons/parallel/Headlamp 11.72
36 TestAddons/parallel/LocalPath 9.47
39 TestAddons/serial/GCPAuth/Namespaces 0.17
40 TestAddons/StoppedEnableDisable 12.34
41 TestCertOptions 33.65
42 TestCertExpiration 232.9
44 TestForceSystemdFlag 44.83
45 TestForceSystemdEnv 45.17
46 TestDockerEnvContainerd 52.44
51 TestErrorSpam/setup 31.87
52 TestErrorSpam/start 0.9
53 TestErrorSpam/status 1.11
54 TestErrorSpam/pause 1.84
55 TestErrorSpam/unpause 2.04
56 TestErrorSpam/stop 1.45
59 TestFunctional/serial/CopySyncFile 0
60 TestFunctional/serial/StartWithProxy 82.46
61 TestFunctional/serial/AuditLog 0
62 TestFunctional/serial/SoftStart 6.15
63 TestFunctional/serial/KubeContext 0.07
64 TestFunctional/serial/KubectlGetPods 0.1
67 TestFunctional/serial/CacheCmd/cache/add_remote 4.39
68 TestFunctional/serial/CacheCmd/cache/add_local 1.46
69 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
70 TestFunctional/serial/CacheCmd/cache/list 0.07
71 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
72 TestFunctional/serial/CacheCmd/cache/cache_reload 2.34
73 TestFunctional/serial/CacheCmd/cache/delete 0.12
74 TestFunctional/serial/MinikubeKubectlCmd 0.14
75 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
76 TestFunctional/serial/ExtraConfig 42.98
77 TestFunctional/serial/ComponentHealth 0.11
78 TestFunctional/serial/LogsCmd 1.86
79 TestFunctional/serial/LogsFileCmd 1.89
80 TestFunctional/serial/InvalidService 4.46
82 TestFunctional/parallel/ConfigCmd 0.46
83 TestFunctional/parallel/DashboardCmd 9.34
84 TestFunctional/parallel/DryRun 0.61
85 TestFunctional/parallel/InternationalLanguage 0.26
86 TestFunctional/parallel/StatusCmd 1.28
90 TestFunctional/parallel/ServiceCmdConnect 10.7
91 TestFunctional/parallel/AddonsCmd 0.2
92 TestFunctional/parallel/PersistentVolumeClaim 25.91
94 TestFunctional/parallel/SSHCmd 0.81
95 TestFunctional/parallel/CpCmd 1.63
97 TestFunctional/parallel/FileSync 0.32
98 TestFunctional/parallel/CertSync 2.36
102 TestFunctional/parallel/NodeLabels 0.11
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
106 TestFunctional/parallel/License 0.32
108 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.75
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
118 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
120 TestFunctional/parallel/ProfileCmd/profile_list 0.41
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
122 TestFunctional/parallel/MountCmd/any-port 8.1
123 TestFunctional/parallel/ServiceCmd/List 0.72
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
126 TestFunctional/parallel/ServiceCmd/Format 0.51
127 TestFunctional/parallel/ServiceCmd/URL 0.53
128 TestFunctional/parallel/MountCmd/specific-port 2.31
129 TestFunctional/parallel/MountCmd/VerifyCleanup 2.08
130 TestFunctional/parallel/Version/short 0.09
131 TestFunctional/parallel/Version/components 1.22
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
136 TestFunctional/parallel/ImageCommands/ImageBuild 3.16
137 TestFunctional/parallel/ImageCommands/Setup 2.09
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
148 TestFunctional/delete_addon-resizer_images 0.09
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
154 TestIngressAddonLegacy/StartLegacyK8sCluster 94.29
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.49
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.73
161 TestJSONOutput/start/Command 85.7
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.82
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.75
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 5.85
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.23
186 TestKicCustomNetwork/create_custom_network 42.11
187 TestKicCustomNetwork/use_default_bridge_network 36.73
188 TestKicExistingNetwork 36.22
189 TestKicCustomSubnet 38.11
190 TestKicStaticIP 34.53
191 TestMainNoArgs 0.05
192 TestMinikubeProfile 69.24
195 TestMountStart/serial/StartWithMountFirst 9.31
196 TestMountStart/serial/VerifyMountFirst 0.29
197 TestMountStart/serial/StartWithMountSecond 7.14
198 TestMountStart/serial/VerifyMountSecond 0.28
199 TestMountStart/serial/DeleteFirst 1.67
200 TestMountStart/serial/VerifyMountPostDelete 0.28
201 TestMountStart/serial/Stop 1.24
202 TestMountStart/serial/RestartStopped 7.32
203 TestMountStart/serial/VerifyMountPostStop 0.29
206 TestMultiNode/serial/FreshStart2Nodes 113.28
207 TestMultiNode/serial/DeployApp2Nodes 5.06
208 TestMultiNode/serial/PingHostFrom2Pods 1.07
209 TestMultiNode/serial/AddNode 17.53
210 TestMultiNode/serial/ProfileList 0.37
211 TestMultiNode/serial/CopyFile 10.84
212 TestMultiNode/serial/StopNode 2.37
213 TestMultiNode/serial/StartAfterStop 12.61
214 TestMultiNode/serial/RestartKeepsNodes 121.06
215 TestMultiNode/serial/DeleteNode 5.1
216 TestMultiNode/serial/StopMultiNode 24.13
217 TestMultiNode/serial/RestartMultiNode 81.17
218 TestMultiNode/serial/ValidateNameConflict 37.38
223 TestPreload 152.11
225 TestScheduledStopUnix 106.18
228 TestInsufficientStorage 11.76
229 TestRunningBinaryUpgrade 90.08
231 TestKubernetesUpgrade 433.23
232 TestMissingContainerUpgrade 176.54
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
235 TestNoKubernetes/serial/StartWithK8s 40.83
236 TestNoKubernetes/serial/StartWithStopK8s 16.42
237 TestNoKubernetes/serial/Start 5.58
238 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
239 TestNoKubernetes/serial/ProfileList 0.92
240 TestNoKubernetes/serial/Stop 1.25
241 TestNoKubernetes/serial/StartNoArgs 7.41
242 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
243 TestStoppedBinaryUpgrade/Setup 1.67
244 TestStoppedBinaryUpgrade/Upgrade 102.56
245 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
254 TestPause/serial/Start 92.01
255 TestPause/serial/SecondStartNoReconfiguration 7.42
256 TestPause/serial/Pause 1.4
257 TestPause/serial/VerifyStatus 0.53
258 TestPause/serial/Unpause 0.98
259 TestPause/serial/PauseAgain 1.33
260 TestPause/serial/DeletePaused 3
261 TestPause/serial/VerifyDeletedResources 0.54
269 TestNetworkPlugins/group/false 5.46
274 TestStartStop/group/old-k8s-version/serial/FirstStart 129.23
275 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
277 TestStartStop/group/old-k8s-version/serial/Stop 12.31
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
279 TestStartStop/group/old-k8s-version/serial/SecondStart 659.16
281 TestStartStop/group/no-preload/serial/FirstStart 73.27
282 TestStartStop/group/no-preload/serial/DeployApp 8.55
283 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
284 TestStartStop/group/no-preload/serial/Stop 12.18
285 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
286 TestStartStop/group/no-preload/serial/SecondStart 335.95
287 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.03
288 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
289 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
290 TestStartStop/group/no-preload/serial/Pause 3.38
292 TestStartStop/group/embed-certs/serial/FirstStart 58.75
293 TestStartStop/group/embed-certs/serial/DeployApp 7.5
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
295 TestStartStop/group/embed-certs/serial/Stop 12.18
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
297 TestStartStop/group/embed-certs/serial/SecondStart 343.08
298 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
300 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
301 TestStartStop/group/old-k8s-version/serial/Pause 3.46
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.62
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.57
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
312 TestStartStop/group/embed-certs/serial/Pause 3.41
314 TestStartStop/group/newest-cni/serial/FirstStart 44.13
315 TestStartStop/group/newest-cni/serial/DeployApp 0
316 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
317 TestStartStop/group/newest-cni/serial/Stop 1.28
318 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/newest-cni/serial/SecondStart 33.46
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
323 TestStartStop/group/newest-cni/serial/Pause 3.36
324 TestNetworkPlugins/group/auto/Start 61.75
325 TestNetworkPlugins/group/auto/KubeletFlags 0.32
326 TestNetworkPlugins/group/auto/NetCatPod 9.37
327 TestNetworkPlugins/group/auto/DNS 0.22
328 TestNetworkPlugins/group/auto/Localhost 0.18
329 TestNetworkPlugins/group/auto/HairPin 0.19
330 TestNetworkPlugins/group/kindnet/Start 85.28
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.03
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.81
335 TestNetworkPlugins/group/calico/Start 81.96
336 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
338 TestNetworkPlugins/group/kindnet/NetCatPod 11.4
339 TestNetworkPlugins/group/kindnet/DNS 0.25
340 TestNetworkPlugins/group/kindnet/Localhost 0.21
341 TestNetworkPlugins/group/kindnet/HairPin 0.22
342 TestNetworkPlugins/group/calico/ControllerPod 5.04
343 TestNetworkPlugins/group/custom-flannel/Start 64.52
344 TestNetworkPlugins/group/calico/KubeletFlags 0.36
345 TestNetworkPlugins/group/calico/NetCatPod 9.63
346 TestNetworkPlugins/group/calico/DNS 0.38
347 TestNetworkPlugins/group/calico/Localhost 0.23
348 TestNetworkPlugins/group/calico/HairPin 0.25
349 TestNetworkPlugins/group/enable-default-cni/Start 86.32
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.5
352 TestNetworkPlugins/group/custom-flannel/DNS 0.21
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
355 TestNetworkPlugins/group/flannel/Start 62.53
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
357 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.49
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
361 TestNetworkPlugins/group/bridge/Start 89.24
362 TestNetworkPlugins/group/flannel/ControllerPod 5.06
363 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
364 TestNetworkPlugins/group/flannel/NetCatPod 10.69
365 TestNetworkPlugins/group/flannel/DNS 0.27
366 TestNetworkPlugins/group/flannel/Localhost 0.25
367 TestNetworkPlugins/group/flannel/HairPin 0.24
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
369 TestNetworkPlugins/group/bridge/NetCatPod 10.35
370 TestNetworkPlugins/group/bridge/DNS 0.2
371 TestNetworkPlugins/group/bridge/Localhost 0.19
372 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (11.27s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-610377 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-610377 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.272308289s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.27s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-610377
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-610377: exit status 85 (217.341027ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-610377 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |          |
	|         | -p download-only-610377        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:00:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:00:12.848508 1117908 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:00:12.848711 1117908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:12.848721 1117908 out.go:309] Setting ErrFile to fd 2...
	I1005 21:00:12.848727 1117908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:12.848996 1117908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	W1005 21:00:12.849131 1117908 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-1112519/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-1112519/.minikube/config/config.json: no such file or directory
	I1005 21:00:12.849506 1117908 out.go:303] Setting JSON to true
	I1005 21:00:12.850513 1117908 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24159,"bootTime":1696515454,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:00:12.850586 1117908 start.go:138] virtualization:  
	I1005 21:00:12.854377 1117908 out.go:97] [download-only-610377] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:00:12.857254 1117908 out.go:169] MINIKUBE_LOCATION=17363
	W1005 21:00:12.854641 1117908 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball: no such file or directory
	I1005 21:00:12.854722 1117908 notify.go:220] Checking for updates...
	I1005 21:00:12.863808 1117908 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:00:12.867231 1117908 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:00:12.869518 1117908 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:00:12.871751 1117908 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1005 21:00:12.876292 1117908 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 21:00:12.876559 1117908 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:00:12.901891 1117908 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:00:12.901996 1117908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:13.002169 1117908 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-05 21:00:12.991718277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:13.002296 1117908 docker.go:294] overlay module found
	I1005 21:00:13.005116 1117908 out.go:97] Using the docker driver based on user configuration
	I1005 21:00:13.005155 1117908 start.go:298] selected driver: docker
	I1005 21:00:13.005164 1117908 start.go:902] validating driver "docker" against <nil>
	I1005 21:00:13.005286 1117908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:13.082055 1117908 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-05 21:00:13.071010058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:13.082214 1117908 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:00:13.082501 1117908 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1005 21:00:13.082656 1117908 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 21:00:13.085599 1117908 out.go:169] Using Docker driver with root privileges
	I1005 21:00:13.088188 1117908 cni.go:84] Creating CNI manager for ""
	I1005 21:00:13.088210 1117908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:00:13.088224 1117908 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:00:13.088240 1117908 start_flags.go:321] config:
	{Name:download-only-610377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-610377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:00:13.091348 1117908 out.go:97] Starting control plane node download-only-610377 in cluster download-only-610377
	I1005 21:00:13.091384 1117908 cache.go:122] Beginning downloading kic base image for docker with containerd
	I1005 21:00:13.094502 1117908 out.go:97] Pulling base image ...
	I1005 21:00:13.094532 1117908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1005 21:00:13.094677 1117908 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:00:13.112454 1117908 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:00:13.113186 1117908 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:00:13.113301 1117908 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:00:13.166395 1117908 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1005 21:00:13.166422 1117908 cache.go:57] Caching tarball of preloaded images
	I1005 21:00:13.166600 1117908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1005 21:00:13.169891 1117908 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1005 21:00:13.169918 1117908 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:00:13.311386 1117908 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1005 21:00:18.884607 1117908 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:00:22.227377 1117908 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:00:22.227476 1117908 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-610377"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.22s)

TestDownloadOnly/v1.28.2/json-events (10.74s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-610377 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-610377 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.743982897s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (10.74s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-610377
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-610377: exit status 85 (79.185878ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-610377 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |          |
	|         | -p download-only-610377        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-610377 | jenkins | v1.31.2 | 05 Oct 23 21:00 UTC |          |
	|         | -p download-only-610377        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:00:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:00:24.340673 1117986 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:00:24.340905 1117986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:24.340917 1117986 out.go:309] Setting ErrFile to fd 2...
	I1005 21:00:24.340924 1117986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:00:24.341187 1117986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	W1005 21:00:24.341322 1117986 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-1112519/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-1112519/.minikube/config/config.json: no such file or directory
	I1005 21:00:24.341547 1117986 out.go:303] Setting JSON to true
	I1005 21:00:24.342612 1117986 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24171,"bootTime":1696515454,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:00:24.342681 1117986 start.go:138] virtualization:  
	I1005 21:00:24.352417 1117986 out.go:97] [download-only-610377] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:00:24.352991 1117986 notify.go:220] Checking for updates...
	I1005 21:00:24.366188 1117986 out.go:169] MINIKUBE_LOCATION=17363
	I1005 21:00:24.382560 1117986 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:00:24.391443 1117986 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:00:24.407849 1117986 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:00:24.425532 1117986 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1005 21:00:24.457021 1117986 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 21:00:24.457734 1117986 config.go:182] Loaded profile config "download-only-610377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1005 21:00:24.457791 1117986 start.go:810] api.Load failed for download-only-610377: filestore "download-only-610377": Docker machine "download-only-610377" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 21:00:24.457918 1117986 driver.go:378] Setting default libvirt URI to qemu:///system
	W1005 21:00:24.457952 1117986 start.go:810] api.Load failed for download-only-610377: filestore "download-only-610377": Docker machine "download-only-610377" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 21:00:24.484012 1117986 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:00:24.484097 1117986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:24.561121 1117986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:00:24.550904566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:24.561226 1117986 docker.go:294] overlay module found
	I1005 21:00:24.618896 1117986 out.go:97] Using the docker driver based on existing profile
	I1005 21:00:24.618932 1117986 start.go:298] selected driver: docker
	I1005 21:00:24.618942 1117986 start.go:902] validating driver "docker" against &{Name:download-only-610377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-610377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:00:24.619156 1117986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:00:24.690786 1117986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:00:24.680780174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:00:24.691356 1117986 cni.go:84] Creating CNI manager for ""
	I1005 21:00:24.691373 1117986 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1005 21:00:24.691385 1117986 start_flags.go:321] config:
	{Name:download-only-610377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-610377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:00:24.719930 1117986 out.go:97] Starting control plane node download-only-610377 in cluster download-only-610377
	I1005 21:00:24.719965 1117986 cache.go:122] Beginning downloading kic base image for docker with containerd
	I1005 21:00:24.752657 1117986 out.go:97] Pulling base image ...
	I1005 21:00:24.752697 1117986 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:00:24.752770 1117986 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:00:24.770206 1117986 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:00:24.770357 1117986 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:00:24.770389 1117986 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 21:00:24.770394 1117986 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 21:00:24.770402 1117986 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:00:24.852080 1117986 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4
	I1005 21:00:24.852117 1117986 cache.go:57] Caching tarball of preloaded images
	I1005 21:00:24.852266 1117986 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 21:00:24.896847 1117986 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1005 21:00:24.896881 1117986 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:00:25.127684 1117986 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:78379c9f92cd83c0fabfb9c72d4ec304 -> /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4
	I1005 21:00:33.487793 1117986 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4 ...
	I1005 21:00:33.487895 1117986 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-1112519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-610377"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-610377
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-750535 --alsologtostderr --binary-mirror http://127.0.0.1:36693 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-750535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-750535
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:926: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-223209
addons_test.go:926: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-223209: exit status 85 (73.824772ms)

-- stdout --
	* Profile "addons-223209" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-223209"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:937: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-223209
addons_test.go:937: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-223209: exit status 85 (63.958465ms)

-- stdout --
	* Profile "addons-223209" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-223209"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (139s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-223209 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-223209 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m18.998007948s)
--- PASS: TestAddons/Setup (139.00s)

TestAddons/parallel/Registry (14.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 31.547768ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-8687b" [295664eb-0493-448c-865b-3496e891de88] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.031753589s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gw7w5" [6fd70213-4315-40d6-b46c-96d44c97c78a] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017528166s
addons_test.go:338: (dbg) Run:  kubectl --context addons-223209 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-223209 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-223209 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.638526239s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 ip
2023/10/05 21:03:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.88s)

TestAddons/parallel/InspektorGadget (11.01s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rtjcf" [de147304-f4b0-4512-b93f-0090d0598e3c] Running
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012031544s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-223209
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-223209: (5.999608594s)
--- PASS: TestAddons/parallel/InspektorGadget (11.01s)

TestAddons/parallel/MetricsServer (5.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 4.211581ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-sfsm4" [e1e3a8e3-0927-46e4-b6db-53c5e662952e] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015483117s
addons_test.go:413: (dbg) Run:  kubectl --context addons-223209 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

TestAddons/parallel/CSI (53.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:559: csi-hostpath-driver pods stabilized in 7.118127ms
addons_test.go:562: (dbg) Run:  kubectl --context addons-223209 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-223209 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d8836c9-7080-4de0-b2f9-d41e9e70aa8d] Pending
helpers_test.go:344: "task-pv-pod" [3d8836c9-7080-4de0-b2f9-d41e9e70aa8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d8836c9-7080-4de0-b2f9-d41e9e70aa8d] Running
addons_test.go:577: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.011542688s
addons_test.go:582: (dbg) Run:  kubectl --context addons-223209 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-223209 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-223209 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-223209 delete pod task-pv-pod
addons_test.go:598: (dbg) Run:  kubectl --context addons-223209 delete pvc hpvc
addons_test.go:604: (dbg) Run:  kubectl --context addons-223209 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:614: (dbg) Run:  kubectl --context addons-223209 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:619: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [04c8f0ef-0e84-487b-a239-5da4b93636c8] Pending
helpers_test.go:344: "task-pv-pod-restore" [04c8f0ef-0e84-487b-a239-5da4b93636c8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [04c8f0ef-0e84-487b-a239-5da4b93636c8] Running
addons_test.go:619: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.011493096s
addons_test.go:624: (dbg) Run:  kubectl --context addons-223209 delete pod task-pv-pod-restore
addons_test.go:624: (dbg) Done: kubectl --context addons-223209 delete pod task-pv-pod-restore: (1.044761107s)
addons_test.go:628: (dbg) Run:  kubectl --context addons-223209 delete pvc hpvc-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-223209 delete volumesnapshot new-snapshot-demo
addons_test.go:636: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:636: (dbg) Done: out/minikube-linux-arm64 -p addons-223209 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.866339172s)
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.70s)

TestAddons/parallel/Headlamp (11.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:822: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-223209 --alsologtostderr -v=1
addons_test.go:822: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-223209 --alsologtostderr -v=1: (1.69638943s)
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-4xgkh" [81c7656f-0f6b-4015-96ed-ed34fc06d207] Pending
helpers_test.go:344: "headlamp-58b88cff49-4xgkh" [81c7656f-0f6b-4015-96ed-ed34fc06d207] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-4xgkh" [81c7656f-0f6b-4015-96ed-ed34fc06d207] Running
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.027339433s
--- PASS: TestAddons/parallel/Headlamp (11.72s)

TestAddons/parallel/LocalPath (9.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:871: (dbg) Run:  kubectl --context addons-223209 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:877: (dbg) Run:  kubectl --context addons-223209 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:881: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-223209 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8d56046b-ff87-4b47-8bb2-c3d6fafdd97e] Pending
helpers_test.go:344: "test-local-path" [8d56046b-ff87-4b47-8bb2-c3d6fafdd97e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8d56046b-ff87-4b47-8bb2-c3d6fafdd97e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8d56046b-ff87-4b47-8bb2-c3d6fafdd97e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.011368195s
addons_test.go:889: (dbg) Run:  kubectl --context addons-223209 get pvc test-pvc -o=json
addons_test.go:898: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 ssh "cat /opt/local-path-provisioner/pvc-f6a4555f-aa36-48f9-875a-61866ab03538_default_test-pvc/file1"
addons_test.go:910: (dbg) Run:  kubectl --context addons-223209 delete pod test-local-path
addons_test.go:914: (dbg) Run:  kubectl --context addons-223209 delete pvc test-pvc
addons_test.go:918: (dbg) Run:  out/minikube-linux-arm64 -p addons-223209 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.47s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:648: (dbg) Run:  kubectl --context addons-223209 create ns new-namespace
addons_test.go:662: (dbg) Run:  kubectl --context addons-223209 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-223209
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-223209: (12.069772933s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-223209
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-223209
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-223209
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (33.65s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-084935 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1005 21:40:59.004785 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-084935 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.941027452s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-084935 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-084935 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-084935 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-084935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-084935
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-084935: (2.005710668s)
--- PASS: TestCertOptions (33.65s)

TestCertExpiration (232.9s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-342181 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-342181 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.769868959s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-342181 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-342181 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.641376707s)
helpers_test.go:175: Cleaning up "cert-expiration-342181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-342181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-342181: (2.490140733s)
--- PASS: TestCertExpiration (232.90s)

TestForceSystemdFlag (44.83s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-847152 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-847152 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.1239054s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-847152 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-847152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-847152
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-847152: (2.342225767s)
--- PASS: TestForceSystemdFlag (44.83s)

TestForceSystemdEnv (45.17s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-147855 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-147855 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.741512194s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-147855 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-147855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-147855
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-147855: (2.065305834s)
--- PASS: TestForceSystemdEnv (45.17s)

TestDockerEnvContainerd (52.44s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-265227 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-265227 --driver=docker  --container-runtime=containerd: (36.338586462s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-265227"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-265227": (1.358645405s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XsR9xfeGqdhZ/agent.1135184" SSH_AGENT_PID="1135185" DOCKER_HOST=ssh://docker@127.0.0.1:34013 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XsR9xfeGqdhZ/agent.1135184" SSH_AGENT_PID="1135185" DOCKER_HOST=ssh://docker@127.0.0.1:34013 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XsR9xfeGqdhZ/agent.1135184" SSH_AGENT_PID="1135185" DOCKER_HOST=ssh://docker@127.0.0.1:34013 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.506637021s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XsR9xfeGqdhZ/agent.1135184" SSH_AGENT_PID="1135185" DOCKER_HOST=ssh://docker@127.0.0.1:34013 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-265227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-265227
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-265227: (2.022365656s)
--- PASS: TestDockerEnvContainerd (52.44s)

TestErrorSpam/setup (31.87s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-943279 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-943279 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-943279 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-943279 --driver=docker  --container-runtime=containerd: (31.872344459s)
--- PASS: TestErrorSpam/setup (31.87s)

TestErrorSpam/start (0.9s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (2.04s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 unpause
--- PASS: TestErrorSpam/unpause (2.04s)

TestErrorSpam/stop (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 stop: (1.247326146s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-943279 --log_dir /tmp/nospam-943279 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17363-1112519/.minikube/files/etc/test/nested/copy/1117903/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-282713 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-282713 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m22.464616444s)
--- PASS: TestFunctional/serial/StartWithProxy (82.46s)

TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-282713 --alsologtostderr -v=8
E1005 21:07:55.956531 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:55.962309 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:55.973409 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:55.993854 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:56.034822 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:56.115533 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:56.276673 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:56.597064 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:07:57.237891 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-282713 --alsologtostderr -v=8: (6.150269851s)
functional_test.go:659: soft start took 6.150807645s for "functional-282713" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.15s)

TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-282713 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cache add registry.k8s.io/pause:3.1
E1005 21:07:58.518847 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 cache add registry.k8s.io/pause:3.1: (1.493104154s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cache add registry.k8s.io/pause:3.3
E1005 21:08:01.080085 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 cache add registry.k8s.io/pause:3.3: (1.5598653s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 cache add registry.k8s.io/pause:latest: (1.336668145s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-282713 /tmp/TestFunctionalserialCacheCmdcacheadd_local1636960028/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cache add minikube-local-cache-test:functional-282713
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 cache add minikube-local-cache-test:functional-282713: (1.002085448s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cache delete minikube-local-cache-test:functional-282713
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-282713
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (340.58101ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cache reload
E1005 21:08:06.201025 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 cache reload: (1.332916157s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.34s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 kubectl -- --context functional-282713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-282713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.98s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-282713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1005 21:08:16.442011 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:08:36.922235 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-282713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.984053409s)
functional_test.go:757: restart took 42.984158098s for "functional-282713" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.98s)

TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-282713 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 logs: (1.859057054s)
--- PASS: TestFunctional/serial/LogsCmd (1.86s)

TestFunctional/serial/LogsFileCmd (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 logs --file /tmp/TestFunctionalserialLogsFileCmd3988266275/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 logs --file /tmp/TestFunctionalserialLogsFileCmd3988266275/001/logs.txt: (1.885455739s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.89s)

TestFunctional/serial/InvalidService (4.46s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-282713 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-282713
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-282713: exit status 115 (422.903687ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31689 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-282713 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)

TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 config get cpus: exit status 14 (90.056979ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 config get cpus: exit status 14 (73.98844ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-282713 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-282713 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1148955: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.34s)

TestFunctional/parallel/DryRun (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-282713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-282713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (303.652625ms)

-- stdout --
	* [functional-282713] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1005 21:09:31.757255 1148623 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:09:31.757601 1148623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:09:31.757639 1148623 out.go:309] Setting ErrFile to fd 2...
	I1005 21:09:31.757667 1148623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:09:31.757972 1148623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:09:31.758390 1148623 out.go:303] Setting JSON to false
	I1005 21:09:31.759638 1148623 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24718,"bootTime":1696515454,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:09:31.759743 1148623 start.go:138] virtualization:  
	I1005 21:09:31.766055 1148623 out.go:177] * [functional-282713] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:09:31.768985 1148623 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:09:31.771556 1148623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:09:31.769130 1148623 notify.go:220] Checking for updates...
	I1005 21:09:31.777575 1148623 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:09:31.780672 1148623 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:09:31.782995 1148623 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:09:31.785325 1148623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:09:31.790552 1148623 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:09:31.791295 1148623 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:09:31.830625 1148623 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:09:31.830722 1148623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:09:31.959506 1148623 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-05 21:09:31.94052029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:09:31.959610 1148623 docker.go:294] overlay module found
	I1005 21:09:31.963196 1148623 out.go:177] * Using the docker driver based on existing profile
	I1005 21:09:31.965461 1148623 start.go:298] selected driver: docker
	I1005 21:09:31.965478 1148623 start.go:902] validating driver "docker" against &{Name:functional-282713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-282713 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:09:31.965601 1148623 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:09:31.968371 1148623 out.go:177] 
	W1005 21:09:31.970748 1148623 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1005 21:09:31.973115 1148623 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-282713 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)

TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-282713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-282713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (255.994671ms)

-- stdout --
	* [functional-282713] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1005 21:09:31.477801 1148579 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:09:31.478118 1148579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:09:31.478149 1148579 out.go:309] Setting ErrFile to fd 2...
	I1005 21:09:31.478177 1148579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:09:31.478704 1148579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:09:31.479287 1148579 out.go:303] Setting JSON to false
	I1005 21:09:31.480677 1148579 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24718,"bootTime":1696515454,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:09:31.480809 1148579 start.go:138] virtualization:  
	I1005 21:09:31.488711 1148579 out.go:177] * [functional-282713] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1005 21:09:31.491083 1148579 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:09:31.494166 1148579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:09:31.491235 1148579 notify.go:220] Checking for updates...
	I1005 21:09:31.496416 1148579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:09:31.498455 1148579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:09:31.500441 1148579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:09:31.502282 1148579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:09:31.504823 1148579 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:09:31.505490 1148579 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:09:31.539181 1148579 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:09:31.539284 1148579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:09:31.656304 1148579 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-05 21:09:31.64468299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:09:31.656412 1148579 docker.go:294] overlay module found
	I1005 21:09:31.659129 1148579 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1005 21:09:31.661648 1148579 start.go:298] selected driver: docker
	I1005 21:09:31.661665 1148579 start.go:902] validating driver "docker" against &{Name:functional-282713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-282713 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:09:31.661784 1148579 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:09:31.664744 1148579 out.go:177] 
	W1005 21:09:31.667300 1148579 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1005 21:09:31.669342 1148579 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

TestFunctional/parallel/ServiceCmdConnect (10.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-282713 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-282713 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-55fhf" [b5ca9124-2022-488d-b652-3dde414635c6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-55fhf" [b5ca9124-2022-488d-b652-3dde414635c6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.017623665s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30410
functional_test.go:1674: http://192.168.49.2:30410: success! body:

Hostname: hello-node-connect-7799dfb7c6-55fhf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30410
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)

TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (25.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f1ac84a7-0b26-4f69-bc9c-564bda903a6c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.049685236s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-282713 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-282713 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-282713 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-282713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c940635d-05cc-4195-81ef-1880317a1e0a] Pending
helpers_test.go:344: "sp-pod" [c940635d-05cc-4195-81ef-1880317a1e0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c940635d-05cc-4195-81ef-1880317a1e0a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.025353269s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-282713 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-282713 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-282713 delete -f testdata/storage-provisioner/pod.yaml: (1.557695276s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-282713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [91b329aa-7bc3-4812-8529-081e3cf06a6d] Pending
E1005 21:09:17.882959 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [91b329aa-7bc3-4812-8529-081e3cf06a6d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [91b329aa-7bc3-4812-8529-081e3cf06a6d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.022561804s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-282713 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.91s)

TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh -n functional-282713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 cp functional-282713:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1055327815/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh -n functional-282713 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.63s)

TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1117903/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /etc/test/nested/copy/1117903/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1117903.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /etc/ssl/certs/1117903.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1117903.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /usr/share/ca-certificates/1117903.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11179032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /etc/ssl/certs/11179032.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11179032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /usr/share/ca-certificates/11179032.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.36s)

TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-282713 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh "sudo systemctl is-active docker": exit status 1 (376.674915ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh "sudo systemctl is-active crio": exit status 1 (360.886341ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
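The two non-zero exits above are the expected result: for an inactive unit, `systemctl is-active` prints `inactive` and exits with status 3, which the ssh wrapper surfaces as `Process exited with status 3`. A minimal sketch of that pass condition, with a hypothetical `check_disabled` helper standing in for the real ssh invocation (its arguments mimic the remote command's stdout and exit code):

```shell
#!/bin/sh
# Sketch of the check behind functional_test.go:2023: a runtime counts as
# disabled when `systemctl is-active <unit>` reports "inactive" AND exits
# non-zero (status 3 for inactive units). check_disabled is hypothetical;
# $1 / $2 stand in for the remote command's stdout and exit status.
check_disabled() {
  out=$1
  rc=$2
  [ "$out" = "inactive" ] && [ "$rc" -ne 0 ]
}
check_disabled inactive 3 && echo "docker: disabled as expected"
check_disabled active 0 || echo "an active runtime would fail the check"
```

On a containerd node both `docker` and `crio` take the first branch, so the test passes despite the non-zero exits in the log.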

                                                
                                    
TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-282713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-282713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-282713 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-282713 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1146369: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)
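The `process already finished` message above is benign: the helper tries to kill the second tunnel after it has already exited, and a failed kill of an already-reaped pid is logged rather than treated as a failure. A pure-shell sketch of that tolerant cleanup (the `( exit 0 )` subshell is a stand-in for the tunnel daemon):

```shell
#!/bin/sh
# Tolerant cleanup in the spirit of helpers_test.go:508: killing a tunnel
# process that has already exited is logged, not treated as an error.
( exit 0 ) &               # short-lived stand-in for the tunnel daemon
pid=$!
wait "$pid"                # the daemon finishes before cleanup runs
if kill "$pid" 2>/dev/null; then
  echo "killed $pid"
else
  echo "process already finished"   # matches the helper's log line
fi
```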

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-282713 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-282713 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0e04cf8f-3ca5-4310-b3c4-7125c422924c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0e04cf8f-3ca5-4310-b3c4-7125c422924c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.015814273s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-282713 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.141.182 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-282713 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-282713 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-282713 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-dqf7q" [78abc06e-a069-4f8b-9d28-5be8cc0bcfc4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-dqf7q" [78abc06e-a069-4f8b-9d28-5be8cc0bcfc4] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.026714404s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "352.148266ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "52.90887ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "355.692485ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "56.402792ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.1s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdany-port3027938026/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696540167280084886" to /tmp/TestFunctionalparallelMountCmdany-port3027938026/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696540167280084886" to /tmp/TestFunctionalparallelMountCmdany-port3027938026/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696540167280084886" to /tmp/TestFunctionalparallelMountCmdany-port3027938026/001/test-1696540167280084886
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (546.027471ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  5 21:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  5 21:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  5 21:09 test-1696540167280084886
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh cat /mount-9p/test-1696540167280084886
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-282713 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [98e49ce6-02ab-4919-9f6c-4dba05d7e20e] Pending
helpers_test.go:344: "busybox-mount" [98e49ce6-02ab-4919-9f6c-4dba05d7e20e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [98e49ce6-02ab-4919-9f6c-4dba05d7e20e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [98e49ce6-02ab-4919-9f6c-4dba05d7e20e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.015602799s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-282713 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdany-port3027938026/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.10s)
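The first `findmnt -T /mount-9p | grep 9p` above fails with exit status 1 because it can run before the 9p mount is visible in the guest; the test simply retries the probe until it succeeds. A poll-until-ready sketch of that pattern, where the hypothetical `probe` function stands in for the ssh/findmnt command and is rigged to succeed on the third attempt:

```shell
#!/bin/sh
# Poll-until-ready loop in the spirit of functional_test_mount_test.go:115:
# retry a probe that may fail before the mount appears, with a retry cap.
# probe is hypothetical; here it succeeds once two attempts have been made.
attempts=0
probe() { [ "$attempts" -ge 2 ]; }
until probe; do
  attempts=$((attempts + 1))
  if [ "$attempts" -gt 10 ]; then
    echo "mount never appeared"
    exit 1
  fi
done
echo "mount visible after $attempts retries"
```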

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.72s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 service list -o json
functional_test.go:1493: Took "538.893759ms" to run "out/minikube-linux-arm64 -p functional-282713 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30243
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30243
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/parallel/MountCmd/specific-port (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdspecific-port2607063557/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (536.60018ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdspecific-port2607063557/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh "sudo umount -f /mount-9p": exit status 1 (368.737603ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-282713 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdspecific-port2607063557/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4002839997/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4002839997/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4002839997/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T" /mount1: (1.179099064s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-282713 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4002839997/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4002839997/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-282713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4002839997/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.08s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 version -o=json --components: (1.222602703s)
--- PASS: TestFunctional/parallel/Version/components (1.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-282713 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-282713
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-282713 image ls --format short --alsologtostderr:
I1005 21:09:59.139717 1151087 out.go:296] Setting OutFile to fd 1 ...
I1005 21:09:59.140061 1151087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.140088 1151087 out.go:309] Setting ErrFile to fd 2...
I1005 21:09:59.140109 1151087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.140449 1151087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
I1005 21:09:59.141152 1151087 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.141373 1151087 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.142055 1151087 cli_runner.go:164] Run: docker container inspect functional-282713 --format={{.State.Status}}
I1005 21:09:59.164064 1151087 ssh_runner.go:195] Run: systemctl --version
I1005 21:09:59.164120 1151087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-282713
I1005 21:09:59.187211 1151087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/functional-282713/id_rsa Username:docker}
I1005 21:09:59.281069 1151087 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-282713 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-282713  | sha256:c6ce5a | 1.01kB |
| docker.io/library/nginx                     | latest             | sha256:2a4fbb | 67.2MB |
| registry.k8s.io/kube-apiserver              | v1.28.2            | sha256:30bb49 | 31.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:df8fd1 | 17.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2            | sha256:89d57b | 30.3MB |
| registry.k8s.io/kube-proxy                  | v1.28.2            | sha256:7da62c | 22MB   |
| registry.k8s.io/kube-scheduler              | v1.28.2            | sha256:64fc40 | 17.1MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-282713 image ls --format table --alsologtostderr:
I1005 21:09:59.727789 1151219 out.go:296] Setting OutFile to fd 1 ...
I1005 21:09:59.728120 1151219 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.728129 1151219 out.go:309] Setting ErrFile to fd 2...
I1005 21:09:59.728135 1151219 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.728451 1151219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
I1005 21:09:59.729198 1151219 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.729321 1151219 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.729881 1151219 cli_runner.go:164] Run: docker container inspect functional-282713 --format={{.State.Status}}
I1005 21:09:59.760357 1151219 ssh_runner.go:195] Run: systemctl --version
I1005 21:09:59.760411 1151219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-282713
I1005 21:09:59.794010 1151219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/functional-282713/id_rsa Username:docker}
I1005 21:09:59.889622 1151219 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-282713 image ls --format json --alsologtostderr:
[{"id":"sha256:2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755"],"repoTags":["docker.io/library/nginx:latest"],"size":"67189734"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":["docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17592393"},{"id":"sha256:89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"30339385"},{"id":"sha256:30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"31551652"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"21980661"},{"id":"sha256:64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"17058006"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:c6ce5ad549596f9869efa8456c2973eebcca6eceadfea477ba7c00b36614321f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-282713"],"size":"1007"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-282713 image ls --format json --alsologtostderr:
I1005 21:09:59.458533 1151148 out.go:296] Setting OutFile to fd 1 ...
I1005 21:09:59.458807 1151148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.458839 1151148 out.go:309] Setting ErrFile to fd 2...
I1005 21:09:59.458878 1151148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.459271 1151148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
I1005 21:09:59.460149 1151148 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.460310 1151148 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.460912 1151148 cli_runner.go:164] Run: docker container inspect functional-282713 --format={{.State.Status}}
I1005 21:09:59.487529 1151148 ssh_runner.go:195] Run: systemctl --version
I1005 21:09:59.487587 1151148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-282713
I1005 21:09:59.515981 1151148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/functional-282713/id_rsa Username:docker}
I1005 21:09:59.614824 1151148 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
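The `image ls --format json` stdout above is a flat array of image records (`id`, `repoDigests`, `repoTags`, `size`, with sizes as byte-count strings). A minimal stdlib sketch of consuming that shape, using two entries trimmed from the run above; the variable names are illustrative, not part of minikube:

```python
import json

# Two entries in the shape emitted by `minikube image ls --format json`
# (trimmed from the output above; "size" is bytes, encoded as a string).
image_ls_output = """
[{"id":"sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
  "repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],
  "repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"21980661"},
 {"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
  "repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
"""

images = json.loads(image_ls_output)

# Total size across the listed images, in MB.
total_mb = sum(int(img["size"]) for img in images) / 1e6

# Tags whose record carries no digest (e.g. cached or locally built images).
tags_without_digest = [img["repoTags"] for img in images if not img["repoDigests"]]

print(f"{len(images)} images, {total_mb:.1f} MB total")
```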

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-282713 image ls --format yaml --alsologtostderr:
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:c6ce5ad549596f9869efa8456c2973eebcca6eceadfea477ba7c00b36614321f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-282713
size: "1007"
- id: sha256:df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests:
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
repoTags:
- docker.io/library/nginx:alpine
size: "17592393"
- id: sha256:2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
repoTags:
- docker.io/library/nginx:latest
size: "67189734"
- id: sha256:89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "30339385"
- id: sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "21980661"
- id: sha256:64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "17058006"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "31551652"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-282713 image ls --format yaml --alsologtostderr:
I1005 21:09:59.141349 1151088 out.go:296] Setting OutFile to fd 1 ...
I1005 21:09:59.141508 1151088 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.141516 1151088 out.go:309] Setting ErrFile to fd 2...
I1005 21:09:59.141522 1151088 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.141813 1151088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
I1005 21:09:59.142500 1151088 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.142660 1151088 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.143231 1151088 cli_runner.go:164] Run: docker container inspect functional-282713 --format={{.State.Status}}
I1005 21:09:59.167406 1151088 ssh_runner.go:195] Run: systemctl --version
I1005 21:09:59.167465 1151088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-282713
I1005 21:09:59.196933 1151088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/functional-282713/id_rsa Username:docker}
I1005 21:09:59.303679 1151088 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-282713 ssh pgrep buildkitd: exit status 1 (373.894465ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image build -t localhost/my-image:functional-282713 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-282713 image build -t localhost/my-image:functional-282713 testdata/build --alsologtostderr: (2.527988167s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-282713 image build -t localhost/my-image:functional-282713 testdata/build --alsologtostderr:
I1005 21:09:59.814570 1151225 out.go:296] Setting OutFile to fd 1 ...
I1005 21:09:59.815259 1151225 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.815296 1151225 out.go:309] Setting ErrFile to fd 2...
I1005 21:09:59.815317 1151225 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:09:59.815630 1151225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
I1005 21:09:59.816454 1151225 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.817927 1151225 config.go:182] Loaded profile config "functional-282713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 21:09:59.818503 1151225 cli_runner.go:164] Run: docker container inspect functional-282713 --format={{.State.Status}}
I1005 21:09:59.846491 1151225 ssh_runner.go:195] Run: systemctl --version
I1005 21:09:59.846545 1151225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-282713
I1005 21:09:59.866200 1151225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/functional-282713/id_rsa Username:docker}
I1005 21:09:59.964658 1151225 build_images.go:151] Building image from path: /tmp/build.3281632732.tar
I1005 21:09:59.964732 1151225 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1005 21:09:59.975068 1151225 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3281632732.tar
I1005 21:09:59.979580 1151225 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3281632732.tar: stat -c "%s %y" /var/lib/minikube/build/build.3281632732.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3281632732.tar': No such file or directory
I1005 21:09:59.979611 1151225 ssh_runner.go:362] scp /tmp/build.3281632732.tar --> /var/lib/minikube/build/build.3281632732.tar (3072 bytes)
I1005 21:10:00.043985 1151225 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3281632732
I1005 21:10:00.112211 1151225 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3281632732 -xf /var/lib/minikube/build/build.3281632732.tar
I1005 21:10:00.134913 1151225 containerd.go:378] Building image: /var/lib/minikube/build/build.3281632732
I1005 21:10:00.135117 1151225 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3281632732 --local dockerfile=/var/lib/minikube/build/build.3281632732 --output type=image,name=localhost/my-image:functional-282713
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:561f5ff67c7b38649707c205ffb3171326c93c624f75a00934562f5d9460105a 0.0s done
#8 exporting config sha256:ece114927dcfb52a20eb36fbba9791c2b4013d145b603c327ede8201394b858c 0.0s done
#8 naming to localhost/my-image:functional-282713 done
#8 DONE 0.2s
I1005 21:10:02.222431 1151225 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3281632732 --local dockerfile=/var/lib/minikube/build/build.3281632732 --output type=image,name=localhost/my-image:functional-282713: (2.087253394s)
I1005 21:10:02.222534 1151225 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3281632732
I1005 21:10:02.233903 1151225 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3281632732.tar
I1005 21:10:02.244563 1151225 build_images.go:207] Built localhost/my-image:functional-282713 from /tmp/build.3281632732.tar
I1005 21:10:02.244590 1151225 build_images.go:123] succeeded building to: functional-282713
I1005 21:10:02.244602 1151225 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
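The build log above shows minikube packaging the local `testdata/build` directory into a tarball (`/tmp/build.3281632732.tar`), copying it to the node under `/var/lib/minikube/build/`, extracting it, and pointing `buildctl` at the extracted directory. A minimal stdlib sketch of just the packaging step; the file contents below are assumed for illustration, not the actual `testdata/build` contents:

```python
import io
import tarfile

# A build context like the one in the log: a Dockerfile plus the file it ADDs.
# Contents are illustrative; the real testdata/build may differ.
files = {
    "Dockerfile": b"FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n",
    "content.txt": b"hello\n",
}

# Pack the context into an in-memory tar archive, as minikube does on disk
# before scp'ing it into the node.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

archive = buf.getvalue()  # this is what would be shipped and extracted remotely

# Sanity check: the archive round-trips and contains exactly the context files.
member_names = sorted(tarfile.open(fileobj=io.BytesIO(archive)).getnames())
print(member_names)
```

Note that tar archives are padded to 512-byte blocks, which is why a tiny two-file context still transfers as a few KB (3072 bytes in the run above).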

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/10/05 21:09:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.060243579s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-282713
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image rm gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-282713
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-282713 image save --daemon gcr.io/google-containers/addon-resizer:functional-282713 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-282713
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-282713
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-282713
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-282713
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (94.29s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-027764 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1005 21:10:39.804026 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-027764 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m34.294866514s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (94.29s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.49s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons enable ingress --alsologtostderr -v=5: (10.485318319s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.49s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.73s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-027764 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.73s)

                                                
                                    
TestJSONOutput/start/Command (85.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-635129 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1005 21:12:55.956221 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:13:23.644276 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:14:00.094442 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.099987 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.110604 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.130929 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.171246 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.251553 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.411963 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:00.732571 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:01.373462 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:02.653803 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:05.214513 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:14:10.334704 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-635129 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m25.69525888s)
--- PASS: TestJSONOutput/start/Command (85.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.82s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-635129 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-635129 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-635129 --output=json --user=testUser
E1005 21:14:20.575718 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-635129 --output=json --user=testUser: (5.854426686s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-362574 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-362574 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.118199ms)

-- stdout --
	{"specversion":"1.0","id":"5073ec1a-b34a-4dae-b0f6-95c815d80ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-362574] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"35861dde-9829-4d34-862b-302c6c5e0149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"47526b61-0f44-4d5d-9491-acb7ea6f0d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3309d03a-6c87-4d9b-9a1a-a5938a1190cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig"}}
	{"specversion":"1.0","id":"c611de58-6ffe-4ebe-871d-23000ae7a492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube"}}
	{"specversion":"1.0","id":"be584d49-4dee-476c-b60a-7720a62f1aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8322fc48-59f1-4246-a33b-5c91b027c011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a593095-ad4b-4853-b9a6-e611c72b3469","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-362574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-362574
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (42.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-462627 --network=
E1005 21:14:41.056412 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-462627 --network=: (40.06317869s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-462627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-462627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-462627: (2.023138507s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.11s)

TestKicCustomNetwork/use_default_bridge_network (36.73s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-708104 --network=bridge
E1005 21:15:22.016641 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-708104 --network=bridge: (34.660095393s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-708104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-708104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-708104: (2.044967882s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.73s)

TestKicExistingNetwork (36.22s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-978851 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-978851 --network=existing-network: (34.029639378s)
helpers_test.go:175: Cleaning up "existing-network-978851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-978851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-978851: (2.03253843s)
--- PASS: TestKicExistingNetwork (36.22s)

TestKicCustomSubnet (38.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-877361 --subnet=192.168.60.0/24
E1005 21:16:43.936919 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:16:50.883181 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:50.888462 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:50.898704 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:50.918947 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:50.959215 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:51.039474 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:51.199824 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:51.520225 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:52.161069 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:53.441891 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:16:56.002816 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-877361 --subnet=192.168.60.0/24: (35.778232611s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-877361 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-877361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-877361
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-877361: (2.302685773s)
--- PASS: TestKicCustomSubnet (38.11s)

TestKicStaticIP (34.53s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-481099 --static-ip=192.168.200.200
E1005 21:17:01.123074 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:17:11.364258 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:17:31.845141 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-481099 --static-ip=192.168.200.200: (32.296182697s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-481099 ip
helpers_test.go:175: Cleaning up "static-ip-481099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-481099
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-481099: (2.071776393s)
--- PASS: TestKicStaticIP (34.53s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-323979 --driver=docker  --container-runtime=containerd
E1005 21:17:55.956791 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-323979 --driver=docker  --container-runtime=containerd: (30.519405914s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-326637 --driver=docker  --container-runtime=containerd
E1005 21:18:12.805383 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-326637 --driver=docker  --container-runtime=containerd: (33.448939343s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-323979
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-326637
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-326637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-326637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-326637: (2.017746325s)
helpers_test.go:175: Cleaning up "first-323979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-323979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-323979: (1.995975953s)
--- PASS: TestMinikubeProfile (69.24s)

TestMountStart/serial/StartWithMountFirst (9.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-364114 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-364114 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.306662172s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.31s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-364114 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.14s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-366436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1005 21:19:00.099958 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-366436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.143914223s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.14s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-366436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-364114 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-364114 --alsologtostderr -v=5: (1.669465365s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-366436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-366436
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-366436: (1.235792606s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-366436
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-366436: (6.323244408s)
--- PASS: TestMountStart/serial/RestartStopped (7.32s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-366436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (113.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821979 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1005 21:19:27.777169 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:19:34.725586 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821979 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m52.692063197s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.28s)

TestMultiNode/serial/DeployApp2Nodes (5.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-821979 -- rollout status deployment/busybox: (2.989193958s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-c5g8d -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-f5ktj -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-c5g8d -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-f5ktj -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-c5g8d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-f5ktj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.06s)

TestMultiNode/serial/PingHostFrom2Pods (1.07s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-c5g8d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-c5g8d -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-f5ktj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-821979 -- exec busybox-5bc68d56bd-f5ktj -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)

TestMultiNode/serial/AddNode (17.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-821979 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-821979 -v 3 --alsologtostderr: (16.81528258s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.53s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.84s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp testdata/cp-test.txt multinode-821979:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile412808656/001/cp-test_multinode-821979.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979:/home/docker/cp-test.txt multinode-821979-m02:/home/docker/cp-test_multinode-821979_multinode-821979-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m02 "sudo cat /home/docker/cp-test_multinode-821979_multinode-821979-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979:/home/docker/cp-test.txt multinode-821979-m03:/home/docker/cp-test_multinode-821979_multinode-821979-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m03 "sudo cat /home/docker/cp-test_multinode-821979_multinode-821979-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp testdata/cp-test.txt multinode-821979-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile412808656/001/cp-test_multinode-821979-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979-m02:/home/docker/cp-test.txt multinode-821979:/home/docker/cp-test_multinode-821979-m02_multinode-821979.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979 "sudo cat /home/docker/cp-test_multinode-821979-m02_multinode-821979.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979-m02:/home/docker/cp-test.txt multinode-821979-m03:/home/docker/cp-test_multinode-821979-m02_multinode-821979-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m03 "sudo cat /home/docker/cp-test_multinode-821979-m02_multinode-821979-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp testdata/cp-test.txt multinode-821979-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile412808656/001/cp-test_multinode-821979-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979-m03:/home/docker/cp-test.txt multinode-821979:/home/docker/cp-test_multinode-821979-m03_multinode-821979.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979 "sudo cat /home/docker/cp-test_multinode-821979-m03_multinode-821979.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 cp multinode-821979-m03:/home/docker/cp-test.txt multinode-821979-m02:/home/docker/cp-test_multinode-821979-m03_multinode-821979-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 ssh -n multinode-821979-m02 "sudo cat /home/docker/cp-test_multinode-821979-m03_multinode-821979-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.84s)

TestMultiNode/serial/StopNode (2.37s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-821979 node stop m03: (1.243756753s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821979 status: exit status 7 (564.260552ms)

-- stdout --
	multinode-821979
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-821979-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-821979-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr: exit status 7 (557.272592ms)

-- stdout --
	multinode-821979
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-821979-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-821979-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 21:21:44.428018 1198596 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:21:44.428255 1198596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:21:44.428286 1198596 out.go:309] Setting ErrFile to fd 2...
	I1005 21:21:44.428307 1198596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:21:44.428581 1198596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:21:44.428781 1198596 out.go:303] Setting JSON to false
	I1005 21:21:44.428901 1198596 mustload.go:65] Loading cluster: multinode-821979
	I1005 21:21:44.428980 1198596 notify.go:220] Checking for updates...
	I1005 21:21:44.429404 1198596 config.go:182] Loaded profile config "multinode-821979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:21:44.429433 1198596 status.go:255] checking status of multinode-821979 ...
	I1005 21:21:44.430736 1198596 cli_runner.go:164] Run: docker container inspect multinode-821979 --format={{.State.Status}}
	I1005 21:21:44.457154 1198596 status.go:330] multinode-821979 host status = "Running" (err=<nil>)
	I1005 21:21:44.457182 1198596 host.go:66] Checking if "multinode-821979" exists ...
	I1005 21:21:44.457606 1198596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-821979
	I1005 21:21:44.484907 1198596 host.go:66] Checking if "multinode-821979" exists ...
	I1005 21:21:44.485228 1198596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:21:44.485277 1198596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-821979
	I1005 21:21:44.507389 1198596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/multinode-821979/id_rsa Username:docker}
	I1005 21:21:44.601518 1198596 ssh_runner.go:195] Run: systemctl --version
	I1005 21:21:44.607023 1198596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:21:44.620559 1198596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:21:44.690408 1198596 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 21:21:44.679368353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:21:44.691112 1198596 kubeconfig.go:92] found "multinode-821979" server: "https://192.168.58.2:8443"
	I1005 21:21:44.691148 1198596 api_server.go:166] Checking apiserver status ...
	I1005 21:21:44.691221 1198596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:21:44.704700 1198596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	I1005 21:21:44.717146 1198596 api_server.go:182] apiserver freezer: "3:freezer:/docker/9c0a39df1a2ee435cead17851915e88fa28b1db9fa86ad92718a56125c01c03d/kubepods/burstable/podd9de9e2292e723255e1f8012e2ab8cce/7c4a151654383c203413ec99f6ca291ed8645b53ce2c07a42031d0b858450704"
	I1005 21:21:44.717234 1198596 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9c0a39df1a2ee435cead17851915e88fa28b1db9fa86ad92718a56125c01c03d/kubepods/burstable/podd9de9e2292e723255e1f8012e2ab8cce/7c4a151654383c203413ec99f6ca291ed8645b53ce2c07a42031d0b858450704/freezer.state
	I1005 21:21:44.727794 1198596 api_server.go:204] freezer state: "THAWED"
	I1005 21:21:44.727824 1198596 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1005 21:21:44.736643 1198596 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1005 21:21:44.736672 1198596 status.go:421] multinode-821979 apiserver status = Running (err=<nil>)
	I1005 21:21:44.736682 1198596 status.go:257] multinode-821979 status: &{Name:multinode-821979 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 21:21:44.736699 1198596 status.go:255] checking status of multinode-821979-m02 ...
	I1005 21:21:44.737011 1198596 cli_runner.go:164] Run: docker container inspect multinode-821979-m02 --format={{.State.Status}}
	I1005 21:21:44.755612 1198596 status.go:330] multinode-821979-m02 host status = "Running" (err=<nil>)
	I1005 21:21:44.755643 1198596 host.go:66] Checking if "multinode-821979-m02" exists ...
	I1005 21:21:44.756058 1198596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-821979-m02
	I1005 21:21:44.774080 1198596 host.go:66] Checking if "multinode-821979-m02" exists ...
	I1005 21:21:44.774525 1198596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:21:44.774573 1198596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-821979-m02
	I1005 21:21:44.792984 1198596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17363-1112519/.minikube/machines/multinode-821979-m02/id_rsa Username:docker}
	I1005 21:21:44.886678 1198596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:21:44.901588 1198596 status.go:257] multinode-821979-m02 status: &{Name:multinode-821979-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1005 21:21:44.901624 1198596 status.go:255] checking status of multinode-821979-m03 ...
	I1005 21:21:44.901951 1198596 cli_runner.go:164] Run: docker container inspect multinode-821979-m03 --format={{.State.Status}}
	I1005 21:21:44.921232 1198596 status.go:330] multinode-821979-m03 host status = "Stopped" (err=<nil>)
	I1005 21:21:44.921253 1198596 status.go:343] host is not running, skipping remaining checks
	I1005 21:21:44.921260 1198596 status.go:257] multinode-821979-m03 status: &{Name:multinode-821979-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)

TestMultiNode/serial/StartAfterStop (12.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 node start m03 --alsologtostderr
E1005 21:21:50.883326 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-821979 node start m03 --alsologtostderr: (11.779706446s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.61s)

TestMultiNode/serial/RestartKeepsNodes (121.06s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-821979
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-821979
E1005 21:22:18.567426 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-821979: (25.142192891s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821979 --wait=true -v=8 --alsologtostderr
E1005 21:22:55.956043 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821979 --wait=true -v=8 --alsologtostderr: (1m35.766899598s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-821979
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.06s)

TestMultiNode/serial/DeleteNode (5.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 node delete m03
E1005 21:24:00.120763 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-821979 node delete m03: (4.322785204s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.10s)

TestMultiNode/serial/StopMultiNode (24.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 stop
E1005 21:24:19.004567 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-821979 stop: (23.956018824s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821979 status: exit status 7 (88.578894ms)

-- stdout --
	multinode-821979
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-821979-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr: exit status 7 (89.546085ms)

-- stdout --
	multinode-821979
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-821979-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 21:24:27.782172 1207264 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:24:27.782442 1207264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:24:27.782470 1207264 out.go:309] Setting ErrFile to fd 2...
	I1005 21:24:27.782490 1207264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:24:27.782768 1207264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:24:27.782992 1207264 out.go:303] Setting JSON to false
	I1005 21:24:27.783139 1207264 mustload.go:65] Loading cluster: multinode-821979
	I1005 21:24:27.783238 1207264 notify.go:220] Checking for updates...
	I1005 21:24:27.783632 1207264 config.go:182] Loaded profile config "multinode-821979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:24:27.783667 1207264 status.go:255] checking status of multinode-821979 ...
	I1005 21:24:27.785467 1207264 cli_runner.go:164] Run: docker container inspect multinode-821979 --format={{.State.Status}}
	I1005 21:24:27.803622 1207264 status.go:330] multinode-821979 host status = "Stopped" (err=<nil>)
	I1005 21:24:27.803642 1207264 status.go:343] host is not running, skipping remaining checks
	I1005 21:24:27.803649 1207264 status.go:257] multinode-821979 status: &{Name:multinode-821979 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 21:24:27.803691 1207264 status.go:255] checking status of multinode-821979-m02 ...
	I1005 21:24:27.803986 1207264 cli_runner.go:164] Run: docker container inspect multinode-821979-m02 --format={{.State.Status}}
	I1005 21:24:27.821181 1207264 status.go:330] multinode-821979-m02 host status = "Stopped" (err=<nil>)
	I1005 21:24:27.821202 1207264 status.go:343] host is not running, skipping remaining checks
	I1005 21:24:27.821210 1207264 status.go:257] multinode-821979-m02 status: &{Name:multinode-821979-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)

TestMultiNode/serial/RestartMultiNode (81.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821979 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821979 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.39128298s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-821979 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.17s)

TestMultiNode/serial/ValidateNameConflict (37.38s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-821979
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821979-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-821979-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.494879ms)

-- stdout --
	* [multinode-821979-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-821979-m02' is duplicated with machine name 'multinode-821979-m02' in profile 'multinode-821979'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-821979-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-821979-m03 --driver=docker  --container-runtime=containerd: (34.837078183s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-821979
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-821979: exit status 80 (361.358675ms)

-- stdout --
	* Adding node m03 to cluster multinode-821979
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-821979-m03 already exists in multinode-821979-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-821979-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-821979-m03: (2.033215904s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.38s)

TestPreload (152.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-075124 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1005 21:26:50.882922 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-075124 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m17.027201066s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-075124 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-075124 image pull gcr.io/k8s-minikube/busybox: (1.391476151s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-075124
E1005 21:27:55.956500 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-075124: (12.042746161s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-075124 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-075124 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (58.9287573s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-075124 image list
helpers_test.go:175: Cleaning up "test-preload-075124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-075124
E1005 21:29:00.093795 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-075124: (2.446544477s)
--- PASS: TestPreload (152.11s)

TestScheduledStopUnix (106.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-297095 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-297095 --memory=2048 --driver=docker  --container-runtime=containerd: (29.839168933s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-297095 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-297095 -n scheduled-stop-297095
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-297095 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-297095 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-297095 -n scheduled-stop-297095
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-297095
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-297095 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1005 21:30:23.137437 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-297095
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-297095: exit status 7 (72.450372ms)

-- stdout --
	scheduled-stop-297095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-297095 -n scheduled-stop-297095
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-297095 -n scheduled-stop-297095: exit status 7 (76.843179ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-297095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-297095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-297095: (4.691005401s)
--- PASS: TestScheduledStopUnix (106.18s)

TestInsufficientStorage (11.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-885224 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-885224 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.118880497s)

-- stdout --
	{"specversion":"1.0","id":"1a02c3c9-48e6-488f-af96-f9a05bf098df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-885224] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e38445b-0729-405e-bdeb-5bdb9bf22f91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"5fe3c0b5-f926-48b4-87d3-efce8b49af91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"009fa427-d39c-447a-a4c9-daabcc3ab5af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig"}}
	{"specversion":"1.0","id":"c6dd047a-51df-4117-b117-0c305dc53994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube"}}
	{"specversion":"1.0","id":"36596ea9-1d60-481b-bbfb-a25581546af4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"68b678bd-fdc3-400a-82f7-e2ffc4b4b2e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"11e2bc4d-6090-4be0-9b6c-db699daf5108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"423547d4-e84b-41e0-b01c-f8eb540742b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fd2aed49-a9bf-4d3b-96c6-f3c9d6b8d870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fba3c986-b22d-4b74-823b-f9f2882da4dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b265c495-20f1-43bd-a8fe-afdb8ea76995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-885224 in cluster insufficient-storage-885224","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad843c49-c790-419c-937e-f80af4ee5b9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"53d4cffb-9e7d-4be2-b66a-0f84d26f1476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd8f6021-a00f-442b-957d-6837b13821a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-885224 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-885224 --output=json --layout=cluster: exit status 7 (310.202863ms)

-- stdout --
	{"Name":"insufficient-storage-885224","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-885224","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1005 21:30:58.059998 1224688 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-885224" does not appear in /home/jenkins/minikube-integration/17363-1112519/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-885224 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-885224 --output=json --layout=cluster: exit status 7 (326.842165ms)

-- stdout --
	{"Name":"insufficient-storage-885224","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-885224","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1005 21:30:58.387999 1224742 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-885224" does not appear in /home/jenkins/minikube-integration/17363-1112519/kubeconfig
	E1005 21:30:58.400016 1224742 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/insufficient-storage-885224/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-885224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-885224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-885224: (2.002883796s)
--- PASS: TestInsufficientStorage (11.76s)

TestRunningBinaryUpgrade (90.08s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.1035876774.exe start -p running-upgrade-612516 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.1035876774.exe start -p running-upgrade-612516 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.840853176s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-612516 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1005 21:36:50.883164 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-612516 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.907443519s)
helpers_test.go:175: Cleaning up "running-upgrade-612516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-612516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-612516: (2.86817167s)
--- PASS: TestRunningBinaryUpgrade (90.08s)

TestKubernetesUpgrade (433.23s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.992184925s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-099136
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-099136: (1.293022989s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-099136 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-099136 status --format={{.Host}}: exit status 7 (69.893627ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m35.451124205s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-099136 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (126.355075ms)

-- stdout --
	* [kubernetes-upgrade-099136] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-099136
	    minikube start -p kubernetes-upgrade-099136 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0991362 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-099136 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-099136 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.75180226s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-099136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-099136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-099136: (2.348926581s)
--- PASS: TestKubernetesUpgrade (433.23s)

TestMissingContainerUpgrade (176.54s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.775720441.exe start -p missing-upgrade-654072 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.775720441.exe start -p missing-upgrade-654072 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.808593988s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-654072
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-654072: (11.231003688s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-654072
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-654072 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1005 21:32:55.956265 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:33:13.928573 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-654072 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.44264025s)
helpers_test.go:175: Cleaning up "missing-upgrade-654072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-654072
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-654072: (2.383022868s)
--- PASS: TestMissingContainerUpgrade (176.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-259976 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-259976 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (79.63395ms)

-- stdout --
	* [NoKubernetes-259976] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (40.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-259976 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-259976 --driver=docker  --container-runtime=containerd: (40.380998621s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-259976 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.83s)

TestNoKubernetes/serial/StartWithStopK8s (16.42s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-259976 --no-kubernetes --driver=docker  --container-runtime=containerd
E1005 21:31:50.883344 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-259976 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.177172648s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-259976 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-259976 status -o json: exit status 2 (328.192872ms)

-- stdout --
	{"Name":"NoKubernetes-259976","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-259976
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-259976: (1.912212396s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.42s)

TestNoKubernetes/serial/Start (5.58s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-259976 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-259976 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.580790063s)
--- PASS: TestNoKubernetes/serial/Start (5.58s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-259976 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-259976 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.425737ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (0.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-259976
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-259976: (1.245241476s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-259976 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-259976 --driver=docker  --container-runtime=containerd: (7.408520078s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-259976 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-259976 "sudo systemctl is-active --quiet service kubelet": exit status 1 (355.517782ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (1.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.67s)

TestStoppedBinaryUpgrade/Upgrade (102.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.3551616315.exe start -p stopped-upgrade-624587 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1005 21:34:00.095191 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.3551616315.exe start -p stopped-upgrade-624587 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.484168132s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.3551616315.exe -p stopped-upgrade-624587 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.3551616315.exe -p stopped-upgrade-624587 stop: (20.218359629s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-624587 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-624587 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.85545329s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.56s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-624587
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-624587: (1.160611241s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
TestPause/serial/Start (92.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-523512 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1005 21:37:55.956485 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-523512 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.012745845s)
--- PASS: TestPause/serial/Start (92.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-523512 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-523512 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.399743542s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.42s)

                                                
                                    
TestPause/serial/Pause (1.4s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-523512 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-523512 --alsologtostderr -v=5: (1.402084723s)
--- PASS: TestPause/serial/Pause (1.40s)

                                                
                                    
TestPause/serial/VerifyStatus (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-523512 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-523512 --output=json --layout=cluster: exit status 2 (528.151225ms)

                                                
                                                
-- stdout --
	{"Name":"pause-523512","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-523512","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.53s)

                                                
                                    
TestPause/serial/Unpause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-523512 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.98s)

                                                
                                    
TestPause/serial/PauseAgain (1.33s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-523512 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-523512 --alsologtostderr -v=5: (1.33227848s)
--- PASS: TestPause/serial/PauseAgain (1.33s)

                                                
                                    
TestPause/serial/DeletePaused (3s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-523512 --alsologtostderr -v=5
E1005 21:39:00.094899 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-523512 --alsologtostderr -v=5: (2.996034762s)
--- PASS: TestPause/serial/DeletePaused (3.00s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-523512
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-523512: exit status 1 (23.132883ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-523512: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                    
TestNetworkPlugins/group/false (5.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-233036 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-233036 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (352.552332ms)

                                                
                                                
-- stdout --
	* [false-233036] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 21:39:34.402881 1263224 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:39:34.403062 1263224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:39:34.403074 1263224 out.go:309] Setting ErrFile to fd 2...
	I1005 21:39:34.403080 1263224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:39:34.403336 1263224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1112519/.minikube/bin
	I1005 21:39:34.403769 1263224 out.go:303] Setting JSON to false
	I1005 21:39:34.404881 1263224 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26521,"bootTime":1696515454,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1005 21:39:34.404957 1263224 start.go:138] virtualization:  
	I1005 21:39:34.407704 1263224 out.go:177] * [false-233036] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:39:34.409774 1263224 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:39:34.411723 1263224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:39:34.410043 1263224 notify.go:220] Checking for updates...
	I1005 21:39:34.415969 1263224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1112519/kubeconfig
	I1005 21:39:34.418028 1263224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1112519/.minikube
	I1005 21:39:34.420541 1263224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:39:34.422681 1263224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:39:34.425428 1263224 config.go:182] Loaded profile config "force-systemd-flag-847152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 21:39:34.425595 1263224 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:39:34.489138 1263224 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:39:34.489231 1263224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:39:34.643883 1263224 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:39:34.631371713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:39:34.643982 1263224 docker.go:294] overlay module found
	I1005 21:39:34.646407 1263224 out.go:177] * Using the docker driver based on user configuration
	I1005 21:39:34.648046 1263224 start.go:298] selected driver: docker
	I1005 21:39:34.648063 1263224 start.go:902] validating driver "docker" against <nil>
	I1005 21:39:34.648097 1263224 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:39:34.650458 1263224 out.go:177] 
	W1005 21:39:34.652317 1263224 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1005 21:39:34.654208 1263224 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-233036 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-233036" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-233036

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: cri-dockerd version:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: containerd daemon status:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: containerd daemon config:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: /etc/containerd/config.toml:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: containerd config dump:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: crio daemon status:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: crio daemon config:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: /etc/crio:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

>>> host: crio config:
* Profile "false-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-233036"

----------------------- debugLogs end: false-233036 [took: 4.947048419s] --------------------------------
helpers_test.go:175: Cleaning up "false-233036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-233036
--- PASS: TestNetworkPlugins/group/false (5.46s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-591616 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1005 21:41:50.883725 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:42:55.956893 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-591616 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m9.232933926s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.23s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-591616 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a371d2e5-391f-4b83-93ae-61d876fae412] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a371d2e5-391f-4b83-93ae-61d876fae412] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.031596121s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-591616 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
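The DeployApp step finishes by running `ulimit -n` inside the busybox pod to confirm the container's open-file limit. A minimal sketch of that final check, using a canned value in place of a live `kubectl exec` (the 1024 floor is an illustrative assumption, not the harness's actual threshold):

```shell
# Canned stand-in for the output of:
#   kubectl --context old-k8s-version-591616 exec busybox -- /bin/sh -c "ulimit -n"
# (a live cluster is assumed unavailable in this sketch)
ulimit_out="1048576"

# Assert the limit is a number at or above an assumed floor.
if [ "$ulimit_out" -ge 1024 ]; then
  echo "ulimit ok: $ulimit_out"
else
  echo "ulimit too low: $ulimit_out" >&2
fi
```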

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-591616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-591616 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/Stop (12.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-591616 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-591616 --alsologtostderr -v=3: (12.308120711s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-591616 -n old-k8s-version-591616
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-591616 -n old-k8s-version-591616: exit status 7 (103.740328ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-591616 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
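EnableAddonAfterStop passes an image override of the form `MetricsScraper=registry.k8s.io/echoserver:1.4` via `--images`. A rough sketch of how such a `key=value` override can be split into its two halves; the parsing below is my own illustration, not minikube's implementation:

```shell
# Hypothetical split of one --images override into addon-image name and
# replacement reference (minikube's real parser is not shown here).
override="MetricsScraper=registry.k8s.io/echoserver:1.4"
name=${override%%=*}    # text before the first '='
image=${override#*=}    # everything after the first '='
echo "name=$name image=$image"
```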

TestStartStop/group/old-k8s-version/serial/SecondStart (659.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-591616 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-591616 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (10m58.74990282s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-591616 -n old-k8s-version-591616
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (659.16s)

TestStartStop/group/no-preload/serial/FirstStart (73.27s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-080782 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:44:00.104003 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-080782 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m13.265203697s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.27s)

TestStartStop/group/no-preload/serial/DeployApp (8.55s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-080782 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af328744-ebec-45c2-a9c3-91117ee9765b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af328744-ebec-45c2-a9c3-91117ee9765b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.034665447s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-080782 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-080782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-080782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070193005s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-080782 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.18s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-080782 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-080782 --alsologtostderr -v=3: (12.176010753s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-080782 -n no-preload-080782
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-080782 -n no-preload-080782: exit status 7 (78.053917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-080782 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (335.95s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-080782 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:46:50.883125 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:47:03.139874 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:47:55.956397 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:49:00.095812 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
E1005 21:49:53.929303 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-080782 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m35.539351219s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-080782 -n no-preload-080782
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nbmt6" [2fdd3d35-3aca-45b6-818a-cd30d9982d14] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nbmt6" [2fdd3d35-3aca-45b6-818a-cd30d9982d14] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.025868373s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.03s)
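The UserAppExistsAfterStop check is the same helpers_test pattern seen throughout this report: poll pods matching a label until one reports Running or the timeout lapses. A compact sketch of that loop, with canned phase values standing in for a live `kubectl` poll:

```shell
# Iterate over successive pod phases (stand-ins for repeated kubectl
# status reads) and succeed once a Running phase is observed.
wait_for_running() {
  for phase in "$@"; do
    if [ "$phase" = "Running" ]; then
      echo "healthy"
      return 0
    fi
    echo "waiting: $phase"
  done
  echo "timed out" >&2
  return 1
}

wait_for_running Pending Pending Running
```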

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nbmt6" [2fdd3d35-3aca-45b6-818a-cd30d9982d14] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011296314s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-080782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-080782 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/no-preload/serial/Pause (3.38s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-080782 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-080782 -n no-preload-080782
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-080782 -n no-preload-080782: exit status 2 (346.64533ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-080782 -n no-preload-080782
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-080782 -n no-preload-080782: exit status 2 (361.404736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-080782 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-080782 -n no-preload-080782
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-080782 -n no-preload-080782
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.38s)
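Throughout the Pause test, non-zero exits from `minikube status` are tolerated: the log marks exit status 2 (seen while a component is paused or stopped) and exit status 7 (host stopped) as "(may be ok)". A small sketch of that exit-code triage, with the mapping inferred from the log lines above rather than from minikube's documentation:

```shell
# Classify a `minikube status` exit code the way this report's log
# messages suggest; the mapping is an inference from the log, not an
# authoritative table.
interpret_status() {
  case "$1" in
    0) echo "running" ;;
    2) echo "exit status 2 (may be ok)" ;;
    7) echo "exit status 7 (may be ok)" ;;
    *) echo "unexpected exit status $1"; return 1 ;;
  esac
}

interpret_status 2
interpret_status 7
```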

TestStartStop/group/embed-certs/serial/FirstStart (58.75s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-221130 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:51:50.883280 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-221130 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (58.750396237s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.75s)

TestStartStop/group/embed-certs/serial/DeployApp (7.5s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-221130 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [89facf2b-d433-4590-989e-67022458a11d] Pending
helpers_test.go:344: "busybox" [89facf2b-d433-4590-989e-67022458a11d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [89facf2b-d433-4590-989e-67022458a11d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.05175453s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-221130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-221130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-221130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065654026s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-221130 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/embed-certs/serial/Stop (12.18s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-221130 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-221130 --alsologtostderr -v=3: (12.179320657s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-221130 -n embed-certs-221130
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-221130 -n embed-certs-221130: exit status 7 (73.347293ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-221130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (343.08s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-221130 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:52:55.956163 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:54:00.093903 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-221130 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m42.511868292s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-221130 -n embed-certs-221130
E1005 21:58:19.056888 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-624wd" [87028d25-3910-4818-9eda-00140691cab5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023324102s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-624wd" [87028d25-3910-4818-9eda-00140691cab5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027755746s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-591616 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-591616 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/old-k8s-version/serial/Pause (3.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-591616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591616 -n old-k8s-version-591616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591616 -n old-k8s-version-591616: exit status 2 (345.928712ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-591616 -n old-k8s-version-591616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-591616 -n old-k8s-version-591616: exit status 2 (356.534455ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-591616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591616 -n old-k8s-version-591616
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-591616 -n old-k8s-version-591616
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.46s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-140654 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:54:53.592702 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:53.597931 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:53.608176 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:53.628418 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:53.668657 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:53.748905 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:53.910047 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:54.230796 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:54.871641 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:56.152549 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:54:58.712984 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:55:03.833383 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:55:14.073879 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:55:34.555021 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-140654 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m0.619952234s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-140654 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ba7e3b6-f00c-4d88-9f15-e34894f47297] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ba7e3b6-f00c-4d88-9f15-e34894f47297] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.029501564s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-140654 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-140654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-140654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126051807s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-140654 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-140654 --alsologtostderr -v=3
E1005 21:56:15.515276 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-140654 --alsologtostderr -v=3: (12.200297828s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654: exit status 7 (78.776875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-140654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-140654 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:56:50.883159 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 21:57:37.435972 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 21:57:39.005006 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:57:55.956169 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
E1005 21:58:13.933755 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:13.939036 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:13.949285 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:13.969491 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:14.009774 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:14.090657 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:14.251007 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:14.571497 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:15.212253 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:58:16.492970 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-140654 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m38.069006249s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sfm6k" [d5554c02-f357-4da1-aadf-56610a78a3ed] Running
E1005 21:58:24.178062 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026322928s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sfm6k" [d5554c02-f357-4da1-aadf-56610a78a3ed] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011312911s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-221130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-221130 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-221130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-221130 -n embed-certs-221130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-221130 -n embed-certs-221130: exit status 2 (364.146601ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-221130 -n embed-certs-221130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-221130 -n embed-certs-221130: exit status 2 (360.335969ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-221130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-221130 -n embed-certs-221130
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-221130 -n embed-certs-221130
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)

TestStartStop/group/newest-cni/serial/FirstStart (44.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-744439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:58:54.899662 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:59:00.094167 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-744439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (44.131061278s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-744439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-744439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.2313491s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-744439 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-744439 --alsologtostderr -v=3: (1.278721025s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-744439 -n newest-cni-744439
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-744439 -n newest-cni-744439: exit status 7 (74.351064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-744439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (33.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-744439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:59:35.860661 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
E1005 21:59:53.591825 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-744439 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.2: (32.966946363s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-744439 -n newest-cni-744439
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.46s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-744439 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-744439 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-744439 -n newest-cni-744439
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-744439 -n newest-cni-744439: exit status 2 (360.812795ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-744439 -n newest-cni-744439
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-744439 -n newest-cni-744439: exit status 2 (398.555753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-744439 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-744439 -n newest-cni-744439
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-744439 -n newest-cni-744439
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.36s)

TestNetworkPlugins/group/auto/Start (61.75s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1005 22:00:21.276266 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
E1005 22:00:57.781338 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m1.747719127s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.75s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-98qdk" [e7a21440-ec0f-4349-99ac-6483570c81a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-98qdk" [e7a21440-ec0f-4349-99ac-6483570c81a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.013959353s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.37s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/Start (85.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1005 22:01:50.883134 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m25.278682509s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.28s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qdrj" [40f57163-8e1c-45b4-a7d4-d7ec7bb61f61] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qdrj" [40f57163-8e1c-45b4-a7d4-d7ec7bb61f61] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.030382368s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7qdrj" [40f57163-8e1c-45b4-a7d4-d7ec7bb61f61] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012589512s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-140654 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-140654 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-140654 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-140654 --alsologtostderr -v=1: (1.023130759s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654: exit status 2 (391.195272ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654: exit status 2 (410.030342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-140654 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-140654 -n default-k8s-diff-port-140654
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.81s)
E1005 22:07:15.536856 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:07:27.225613 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:07:55.956922 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (81.96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1005 22:02:55.956705 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/addons-223209/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m21.9565893s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.96s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7kzk2" [5eb6cc97-9809-4d61-92d6-feb2e052cc9e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.046620261s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nmhqv" [03866c8b-3139-4987-a471-a83ec06b518b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1005 22:03:13.933334 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/old-k8s-version-591616/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nmhqv" [03866c8b-3139-4987-a471-a83ec06b518b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.014741352s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.40s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kxvxj" [4981d6ec-1533-4263-a40c-797490f0f2a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.038992322s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/custom-flannel/Start (64.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.519755216s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.52s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (9.63s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vr76s" [123aa3d0-4bb8-4c0b-9eb5-b3f54378ef67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vr76s" [123aa3d0-4bb8-4c0b-9eb5-b3f54378ef67] Running
E1005 22:04:00.094320 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/functional-282713/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.018568745s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.63s)

TestNetworkPlugins/group/calico/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.38s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (86.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m26.322977069s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-krgxn" [d8b8934a-619a-4b61-b46d-c6a2eab5f7b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1005 22:04:53.591846 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/no-preload-080782/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-krgxn" [d8b8934a-619a-4b61-b46d-c6a2eab5f7b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011825775s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.50s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (62.53s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1005 22:05:53.614342 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:53.619555 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:53.629753 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:53.649990 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:53.690226 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:53.770466 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:53.930788 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:54.251294 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:54.891793 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:05:56.172721 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m2.533809443s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.53s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v7kg6" [fcd9b95b-319c-42a1-8848-d26f74ff38f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1005 22:05:58.732892 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-v7kg6" [fcd9b95b-319c-42a1-8848-d26f74ff38f8] Running
E1005 22:06:03.854119 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
E1005 22:06:05.302792 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.308114 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.318391 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.338682 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.379039 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.459418 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.620575 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:05.941351 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:06.581783 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
E1005 22:06:07.861918 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.012104568s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.49s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (89.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-233036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m29.237964343s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.24s)

TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wtrdk" [5d79cf08-1fcc-41c7-9bc9-175862ba072f] Running
E1005 22:06:33.929507 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/ingress-addon-legacy-027764/client.crt: no such file or directory
E1005 22:06:34.575911 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/default-k8s-diff-port-140654/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.062718621s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/flannel/NetCatPod (10.69s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gkbx7" [5928019e-8fa9-4b7c-ae7a-84ad5d573dfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gkbx7" [5928019e-8fa9-4b7c-ae7a-84ad5d573dfb] Running
E1005 22:06:46.265405 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/auto-233036/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.061121575s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.69s)

TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-233036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.35s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-233036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7pbq9" [4e158d57-381b-42c0-84da-db3caee51412] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7pbq9" [4e158d57-381b-42c0-84da-db3caee51412] Running
E1005 22:08:06.966949 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:06.972249 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:06.982608 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:07.005664 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:07.045989 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:07.126872 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:07.287243 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:07.607775 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:08.248007 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
E1005 22:08:09.528835 1117903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/kindnet-233036/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011135235s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.35s)

TestNetworkPlugins/group/bridge/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-233036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-233036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (28/307)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-853390 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-853390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-853390
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:442: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:496: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-440921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-440921
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.72s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-233036 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-233036
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-233036
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: /etc/hosts:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: /etc/resolv.conf:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-233036
>>> host: crictl pods:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: crictl containers:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> k8s: describe netcat deployment:
error: context "kubenet-233036" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-233036" does not exist
>>> k8s: netcat logs:
error: context "kubenet-233036" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-233036" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-233036" does not exist
>>> k8s: coredns logs:
error: context "kubenet-233036" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-233036" does not exist
>>> k8s: api server logs:
error: context "kubenet-233036" does not exist
>>> host: /etc/cni:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: ip a s:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: ip r s:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: iptables-save:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: iptables table nat:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-233036" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-233036" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-233036" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: kubelet daemon config:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> k8s: kubelet logs:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-233036

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-233036"

                                                
                                                
----------------------- debugLogs end: kubenet-233036 [took: 5.433713876s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-233036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-233036
--- SKIP: TestNetworkPlugins/group/kubenet (5.72s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-233036 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-233036" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-1112519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:39:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-847152
contexts:
- context:
    cluster: force-systemd-flag-847152
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:39:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: force-systemd-flag-847152
  name: force-systemd-flag-847152
current-context: force-systemd-flag-847152
kind: Config
preferences: {}
users:
- name: force-systemd-flag-847152
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/force-systemd-flag-847152/client.crt
    client-key: /home/jenkins/minikube-integration/17363-1112519/.minikube/profiles/force-systemd-flag-847152/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-233036

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-233036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-233036"

                                                
                                                
----------------------- debugLogs end: cilium-233036 [took: 5.559138077s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-233036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-233036
--- SKIP: TestNetworkPlugins/group/cilium (5.78s)

                                                
                                    