Test Report: Docker_Linux_containerd_arm64 17822

1b14f6e8a127ccddfb64acb15c203e20bb49b800:2023-12-19:32341

Test failures (9/315)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-scheduler_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-proxy_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/kube-proxy_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/pause_3.9" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/pause_3.9: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/etcd_3.5.10-0" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/etcd_3.5.10-0: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/coredns/coredns_v1.11.1" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/registry.k8s.io/coredns/coredns_v1.11.1: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/linux/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
--- FAIL: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)
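All eight missing files follow the same naming scheme under the profile's cache directory: the image reference with the `:` before the tag replaced by `_`. A rough sketch of that mapping (hypothetical helper, not minikube's actual code; the `minikube_home` value below is illustrative):

```python
from pathlib import Path

# Hypothetical helper: map an image reference to the on-disk cache path that
# aaa_download_only_test.go checks, mirroring the layout in the failure above
# (the ':' separating name and tag becomes '_').
def cache_path(minikube_home: str, image_ref: str) -> Path:
    return Path(minikube_home, "cache", "images", "linux",
                image_ref.replace(":", "_"))

print(cache_path("/home/jenkins/.minikube",
                 "registry.k8s.io/kube-apiserver:v1.29.0-rc.2"))
# /home/jenkins/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
```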

TestAddons/parallel/Ingress (38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-505406 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-505406 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-505406 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f9ecb559-e6e8-4975-9ec1-9e474924355a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f9ecb559-e6e8-4975-9ec1-9e474924355a] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004306156s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-505406 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.073009316s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-505406 addons disable ingress-dns --alsologtostderr -v=1: (1.064373293s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-505406 addons disable ingress --alsologtostderr -v=1: (8.084452948s)
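The failure at addons_test.go:302 is the test rejecting nslookup stdout that contains a timeout marker instead of an answer. A minimal sketch of that kind of screening (hypothetical function name; the real test's exact check may differ):

```python
# Hypothetical classification of nslookup stdout, mirroring the failure above;
# this only illustrates the idea, it is not the test's actual code.
TIMEOUT_MARKER = ";; connection timed out"

def nslookup_reached_server(stdout: str) -> bool:
    """Return False when the resolver never answered (the failure seen above)."""
    return TIMEOUT_MARKER not in stdout

print(nslookup_reached_server(
    ";; connection timed out; no servers could be reached"))  # False
```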
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-505406
helpers_test.go:235: (dbg) docker inspect addons-505406:

-- stdout --
	[
	    {
	        "Id": "d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728",
	        "Created": "2023-12-18T23:27:01.007821564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4010841,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T23:27:01.352233717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/hosts",
	        "LogPath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728-json.log",
	        "Name": "/addons-505406",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-505406:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-505406",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc-init/diff:/var/lib/docker/overlay2/348b7bce1eeb3fbac023de8c50816ddfb5fe3d6cead44e087fa78b4f572e0dfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-505406",
	                "Source": "/var/lib/docker/volumes/addons-505406/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-505406",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-505406",
	                "name.minikube.sigs.k8s.io": "addons-505406",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "544c8b68e7e92c659164975305e0f5f4fe521f9bb758d6e982126866ea4ea66f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42671"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42669"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/544c8b68e7e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-505406": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d20ab8e9043d",
	                        "addons-505406"
	                    ],
	                    "NetworkID": "db63daa0791f94b269968270441aa9a8b30c2c70c5566ef50d71b7852e649ada",
	                    "EndpointID": "9442a295e48eefaf997ea0089a610105213721442a481356e87d507910dc59a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
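For reference, the `NetworkSettings.Ports` block in the inspect output above is plain JSON and easy to post-process when debugging port forwarding. A small sketch (the embedded JSON is a trimmed copy of the output above):

```python
import json

# Extract host port bindings from `docker inspect` output. The JSON here is a
# trimmed-down copy of the NetworkSettings.Ports block shown above.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
  "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "42671"}],
  "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "42668"}]
}}}]
""")

ports = {
    container_port: bindings[0]["HostPort"]
    for container_port, bindings in inspect_output[0]["NetworkSettings"]["Ports"].items()
    if bindings
}
print(ports)  # {'22/tcp': '42671', '8443/tcp': '42668'}
```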
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-505406 -n addons-505406
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-505406 logs -n 25: (1.719784933s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:25 UTC |                     |
	|         | -p download-only-037071              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | -p download-only-037071              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | -p download-only-037071              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| delete  | -p download-only-037071              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| delete  | -p download-only-037071              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| start   | --download-only -p                   | download-docker-388185 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | download-docker-388185               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-388185            | download-docker-388185 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| start   | --download-only -p                   | binary-mirror-180531   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | binary-mirror-180531                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34359               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-180531              | binary-mirror-180531   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| addons  | enable dashboard -p                  | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | addons-505406                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | addons-505406                        |                        |         |         |                     |                     |
	| start   | -p addons-505406 --wait=true         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:28 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-505406 ip                     | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	| addons  | addons-505406 addons disable         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-505406 addons                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | addons-505406                        |                        |         |         |                     |                     |
	| ssh     | addons-505406 ssh curl -s            | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-505406 ip                     | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	| addons  | addons-505406 addons disable         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-505406 addons disable         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:30 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-505406 addons                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC |                     |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-505406 addons                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:26:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:26:37.292200 4010377 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:26:37.292409 4010377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:37.292418 4010377 out.go:309] Setting ErrFile to fd 2...
	I1218 23:26:37.292425 4010377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:37.292692 4010377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:26:37.293200 4010377 out.go:303] Setting JSON to false
	I1218 23:26:37.294050 4010377 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":198541,"bootTime":1702743457,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:26:37.294124 4010377 start.go:138] virtualization:  
	I1218 23:26:37.296498 4010377 out.go:177] * [addons-505406] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:26:37.299248 4010377 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:26:37.301246 4010377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:26:37.299439 4010377 notify.go:220] Checking for updates...
	I1218 23:26:37.304942 4010377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:26:37.307028 4010377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:26:37.308811 4010377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:26:37.310563 4010377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:26:37.312861 4010377 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:26:37.340219 4010377 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:26:37.340410 4010377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:37.422107 4010377 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:37.412229734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:37.422206 4010377 docker.go:295] overlay module found
	I1218 23:26:37.425562 4010377 out.go:177] * Using the docker driver based on user configuration
	I1218 23:26:37.427324 4010377 start.go:298] selected driver: docker
	I1218 23:26:37.427343 4010377 start.go:902] validating driver "docker" against <nil>
	I1218 23:26:37.427357 4010377 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:26:37.428016 4010377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:37.494435 4010377 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:37.485016879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:37.494589 4010377 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:26:37.494818 4010377 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 23:26:37.497252 4010377 out.go:177] * Using Docker driver with root privileges
	I1218 23:26:37.499009 4010377 cni.go:84] Creating CNI manager for ""
	I1218 23:26:37.499027 4010377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:26:37.499039 4010377 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:26:37.499056 4010377 start_flags.go:323] config:
	{Name:addons-505406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:26:37.501089 4010377 out.go:177] * Starting control plane node addons-505406 in cluster addons-505406
	I1218 23:26:37.502905 4010377 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1218 23:26:37.504987 4010377 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:26:37.506887 4010377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:26:37.506950 4010377 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1218 23:26:37.506963 4010377 cache.go:56] Caching tarball of preloaded images
	I1218 23:26:37.506971 4010377 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:26:37.507040 4010377 preload.go:174] Found /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 23:26:37.507050 4010377 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1218 23:26:37.507428 4010377 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/config.json ...
	I1218 23:26:37.507457 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/config.json: {Name:mk30dd6bf76cefa6c7749527f9b98923bb68ed32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:26:37.523899 4010377 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:26:37.524034 4010377 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:26:37.524054 4010377 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 23:26:37.524059 4010377 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 23:26:37.524067 4010377 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 23:26:37.524073 4010377 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from local cache
	I1218 23:26:53.557708 4010377 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from cached tarball
	I1218 23:26:53.557750 4010377 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:26:53.557818 4010377 start.go:365] acquiring machines lock for addons-505406: {Name:mk2ccdf55f1151729aacb931c7e8fe9ebfb0ea80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:53.557962 4010377 start.go:369] acquired machines lock for "addons-505406" in 117.653µs
	I1218 23:26:53.557993 4010377 start.go:93] Provisioning new machine with config: &{Name:addons-505406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 23:26:53.558079 4010377 start.go:125] createHost starting for "" (driver="docker")
	I1218 23:26:53.560706 4010377 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1218 23:26:53.561009 4010377 start.go:159] libmachine.API.Create for "addons-505406" (driver="docker")
	I1218 23:26:53.561044 4010377 client.go:168] LocalClient.Create starting
	I1218 23:26:53.561167 4010377 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem
	I1218 23:26:54.124770 4010377 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem
	I1218 23:26:54.703818 4010377 cli_runner.go:164] Run: docker network inspect addons-505406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 23:26:54.720948 4010377 cli_runner.go:211] docker network inspect addons-505406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 23:26:54.721028 4010377 network_create.go:281] running [docker network inspect addons-505406] to gather additional debugging logs...
	I1218 23:26:54.721049 4010377 cli_runner.go:164] Run: docker network inspect addons-505406
	W1218 23:26:54.738053 4010377 cli_runner.go:211] docker network inspect addons-505406 returned with exit code 1
	I1218 23:26:54.738084 4010377 network_create.go:284] error running [docker network inspect addons-505406]: docker network inspect addons-505406: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-505406 not found
	I1218 23:26:54.738098 4010377 network_create.go:286] output of [docker network inspect addons-505406]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-505406 not found
	
	** /stderr **
	I1218 23:26:54.738239 4010377 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:26:54.755228 4010377 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024f8870}
	I1218 23:26:54.755266 4010377 network_create.go:124] attempt to create docker network addons-505406 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 23:26:54.755321 4010377 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-505406 addons-505406
	I1218 23:26:54.825204 4010377 network_create.go:108] docker network addons-505406 192.168.49.0/24 created
	I1218 23:26:54.825237 4010377 kic.go:121] calculated static IP "192.168.49.2" for the "addons-505406" container
	I1218 23:26:54.825326 4010377 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:26:54.841985 4010377 cli_runner.go:164] Run: docker volume create addons-505406 --label name.minikube.sigs.k8s.io=addons-505406 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:26:54.861144 4010377 oci.go:103] Successfully created a docker volume addons-505406
	I1218 23:26:54.861243 4010377 cli_runner.go:164] Run: docker run --rm --name addons-505406-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505406 --entrypoint /usr/bin/test -v addons-505406:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:26:56.728701 4010377 cli_runner.go:217] Completed: docker run --rm --name addons-505406-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505406 --entrypoint /usr/bin/test -v addons-505406:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.867415277s)
	I1218 23:26:56.728739 4010377 oci.go:107] Successfully prepared a docker volume addons-505406
	I1218 23:26:56.728772 4010377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:26:56.728799 4010377 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:26:56.728909 4010377 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-505406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:27:00.917123 4010377 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-505406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.188157128s)
	I1218 23:27:00.917160 4010377 kic.go:203] duration metric: took 4.188357 seconds to extract preloaded images to volume
	W1218 23:27:00.917313 4010377 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:27:00.917424 4010377 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:27:00.990134 4010377 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-505406 --name addons-505406 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505406 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-505406 --network addons-505406 --ip 192.168.49.2 --volume addons-505406:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:27:01.362236 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Running}}
	I1218 23:27:01.393734 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:01.417654 4010377 cli_runner.go:164] Run: docker exec addons-505406 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:27:01.472638 4010377 oci.go:144] the created container "addons-505406" has a running status.
	I1218 23:27:01.472666 4010377 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa...
	I1218 23:27:01.987378 4010377 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:27:02.022934 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:02.047436 4010377 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:27:02.047461 4010377 kic_runner.go:114] Args: [docker exec --privileged addons-505406 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:27:02.110721 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:02.136592 4010377 machine.go:88] provisioning docker machine ...
	I1218 23:27:02.136625 4010377 ubuntu.go:169] provisioning hostname "addons-505406"
	I1218 23:27:02.136699 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:02.168253 4010377 main.go:141] libmachine: Using SSH client type: native
	I1218 23:27:02.168724 4010377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 42671 <nil> <nil>}
	I1218 23:27:02.168748 4010377 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-505406 && echo "addons-505406" | sudo tee /etc/hostname
	I1218 23:27:02.377388 4010377 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-505406
	
	I1218 23:27:02.377490 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:02.405625 4010377 main.go:141] libmachine: Using SSH client type: native
	I1218 23:27:02.406097 4010377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 42671 <nil> <nil>}
	I1218 23:27:02.406120 4010377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-505406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-505406/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-505406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:27:02.566156 4010377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 23:27:02.566223 4010377 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-4004447/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-4004447/.minikube}
	I1218 23:27:02.566270 4010377 ubuntu.go:177] setting up certificates
	I1218 23:27:02.566304 4010377 provision.go:83] configureAuth start
	I1218 23:27:02.566410 4010377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505406
	I1218 23:27:02.588425 4010377 provision.go:138] copyHostCerts
	I1218 23:27:02.588500 4010377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.pem (1082 bytes)
	I1218 23:27:02.588617 4010377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/cert.pem (1123 bytes)
	I1218 23:27:02.588714 4010377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/key.pem (1675 bytes)
	I1218 23:27:02.588769 4010377 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem org=jenkins.addons-505406 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-505406]
	I1218 23:27:03.113412 4010377 provision.go:172] copyRemoteCerts
	I1218 23:27:03.113510 4010377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:27:03.113559 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.133253 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.239923 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 23:27:03.269748 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 23:27:03.301192 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 23:27:03.331066 4010377 provision.go:86] duration metric: configureAuth took 764.719628ms
	I1218 23:27:03.331125 4010377 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:27:03.331322 4010377 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:03.331336 4010377 machine.go:91] provisioned docker machine in 1.194723437s
	I1218 23:27:03.331343 4010377 client.go:171] LocalClient.Create took 9.770288418s
	I1218 23:27:03.331361 4010377 start.go:167] duration metric: libmachine.API.Create for "addons-505406" took 9.770354156s
	I1218 23:27:03.331374 4010377 start.go:300] post-start starting for "addons-505406" (driver="docker")
	I1218 23:27:03.331383 4010377 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:27:03.331444 4010377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:27:03.331486 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.350193 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.455998 4010377 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:27:03.460402 4010377 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:27:03.460439 4010377 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:27:03.460452 4010377 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:27:03.460459 4010377 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:27:03.460473 4010377 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-4004447/.minikube/addons for local assets ...
	I1218 23:27:03.460544 4010377 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-4004447/.minikube/files for local assets ...
	I1218 23:27:03.460568 4010377 start.go:303] post-start completed in 129.188354ms
	I1218 23:27:03.460995 4010377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505406
	I1218 23:27:03.479570 4010377 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/config.json ...
	I1218 23:27:03.479856 4010377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:27:03.479913 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.498949 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.599109 4010377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 23:27:03.605029 4010377 start.go:128] duration metric: createHost completed in 10.046933623s
	I1218 23:27:03.605058 4010377 start.go:83] releasing machines lock for "addons-505406", held for 10.047081405s
	I1218 23:27:03.605136 4010377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505406
	I1218 23:27:03.623013 4010377 ssh_runner.go:195] Run: cat /version.json
	I1218 23:27:03.623082 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.623360 4010377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:27:03.623420 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.644113 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.644568 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.749786 4010377 ssh_runner.go:195] Run: systemctl --version
	I1218 23:27:03.889364 4010377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:27:03.895313 4010377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1218 23:27:03.926200 4010377 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:27:03.926341 4010377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:27:03.960210 4010377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1218 23:27:03.960285 4010377 start.go:475] detecting cgroup driver to use...
	I1218 23:27:03.960333 4010377 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:27:03.960414 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 23:27:03.975644 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 23:27:03.990220 4010377 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:27:03.990310 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:27:04.007963 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:27:04.025705 4010377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:27:04.126869 4010377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:27:04.230198 4010377 docker.go:219] disabling docker service ...
	I1218 23:27:04.230311 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:27:04.251627 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:27:04.266235 4010377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:27:04.366233 4010377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:27:04.465595 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:27:04.479754 4010377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:27:04.499523 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 23:27:04.511774 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 23:27:04.524043 4010377 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 23:27:04.524164 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 23:27:04.536144 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 23:27:04.548239 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 23:27:04.560652 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 23:27:04.573219 4010377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:27:04.584841 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 23:27:04.596662 4010377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:27:04.607006 4010377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:27:04.618976 4010377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:27:04.716224 4010377 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 23:27:04.863732 4010377 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 23:27:04.863874 4010377 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 23:27:04.868966 4010377 start.go:543] Will wait 60s for crictl version
	I1218 23:27:04.869054 4010377 ssh_runner.go:195] Run: which crictl
	I1218 23:27:04.873775 4010377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:27:04.918091 4010377 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1218 23:27:04.918182 4010377 ssh_runner.go:195] Run: containerd --version
	I1218 23:27:04.948293 4010377 ssh_runner.go:195] Run: containerd --version
	I1218 23:27:04.981532 4010377 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I1218 23:27:04.983544 4010377 cli_runner.go:164] Run: docker network inspect addons-505406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:27:05.004333 4010377 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 23:27:05.012704 4010377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:27:05.028403 4010377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:27:05.028486 4010377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:27:05.072679 4010377 containerd.go:604] all images are preloaded for containerd runtime.
	I1218 23:27:05.072712 4010377 containerd.go:518] Images already preloaded, skipping extraction
	I1218 23:27:05.072773 4010377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:27:05.120543 4010377 containerd.go:604] all images are preloaded for containerd runtime.
	I1218 23:27:05.120569 4010377 cache_images.go:84] Images are preloaded, skipping loading
	I1218 23:27:05.120641 4010377 ssh_runner.go:195] Run: sudo crictl info
	I1218 23:27:05.166401 4010377 cni.go:84] Creating CNI manager for ""
	I1218 23:27:05.166433 4010377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:27:05.166469 4010377 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:27:05.166494 4010377 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-505406 NodeName:addons-505406 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 23:27:05.166643 4010377 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-505406"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 23:27:05.166717 4010377 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-505406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 23:27:05.166794 4010377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 23:27:05.179463 4010377 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:27:05.179549 4010377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 23:27:05.191775 4010377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1218 23:27:05.213964 4010377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 23:27:05.236025 4010377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1218 23:27:05.257979 4010377 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:27:05.262618 4010377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:27:05.276496 4010377 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406 for IP: 192.168.49.2
	I1218 23:27:05.276532 4010377 certs.go:190] acquiring lock for shared ca certs: {Name:mk406b12e6a80d6e5757943ee55b3a3d6680c96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.277056 4010377 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key
	I1218 23:27:05.486030 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt ...
	I1218 23:27:05.486068 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt: {Name:mk0fb448f34fc36bba3ee3d1f11cdce25cc0aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.486723 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key ...
	I1218 23:27:05.486740 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key: {Name:mkb39ae66a6f7eae1fc2542e2fcbf85ec3cb4e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.486840 4010377 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key
	I1218 23:27:05.932805 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt ...
	I1218 23:27:05.932836 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt: {Name:mk8477f7cbd8fcd5d4657b7e1a7890f13d74f9a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.933448 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key ...
	I1218 23:27:05.933464 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key: {Name:mk9ba7df0fb5db06291706011b4208407cde640c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.933593 4010377 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.key
	I1218 23:27:05.933609 4010377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt with IP's: []
	I1218 23:27:06.771188 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt ...
	I1218 23:27:06.771220 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: {Name:mk18a7486c38f159230614dfdce1d43c34517f87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.771813 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.key ...
	I1218 23:27:06.771829 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.key: {Name:mk253971840ff54c5df7f7f76c6a1b6039ee2e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.771936 4010377 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2
	I1218 23:27:06.771963 4010377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 23:27:06.959970 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2 ...
	I1218 23:27:06.959999 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2: {Name:mkced843377e0a244fcc135d677912e5779f319b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.960191 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2 ...
	I1218 23:27:06.960207 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2: {Name:mk1247b4bf884a01409848889e4f75dce1a04f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.960746 4010377 certs.go:337] copying /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt
	I1218 23:27:06.960832 4010377 certs.go:341] copying /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key
	I1218 23:27:06.960912 4010377 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key
	I1218 23:27:06.960929 4010377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt with IP's: []
	I1218 23:27:07.264320 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt ...
	I1218 23:27:07.264350 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt: {Name:mka4a15fe590746f54c0c23809a71c18bb8a3577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:07.264542 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key ...
	I1218 23:27:07.264555 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key: {Name:mkc2db8121f13298e9e1f44a66f0c29b401aea67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:07.265146 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:27:07.265197 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem (1082 bytes)
	I1218 23:27:07.265226 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:27:07.265255 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem (1675 bytes)
	I1218 23:27:07.265852 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 23:27:07.297421 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 23:27:07.327843 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 23:27:07.357647 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 23:27:07.386631 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:27:07.415095 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 23:27:07.443089 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:27:07.470907 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 23:27:07.499771 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:27:07.529018 4010377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 23:27:07.550886 4010377 ssh_runner.go:195] Run: openssl version
	I1218 23:27:07.557878 4010377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:27:07.569789 4010377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:27:07.574488 4010377 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:27:07.574586 4010377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:27:07.583555 4010377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:27:07.595745 4010377 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:27:07.600203 4010377 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:27:07.600250 4010377 kubeadm.go:404] StartCluster: {Name:addons-505406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:27:07.600374 4010377 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 23:27:07.600453 4010377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:27:07.643559 4010377 cri.go:89] found id: ""
	I1218 23:27:07.643674 4010377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 23:27:07.654515 4010377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 23:27:07.665880 4010377 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 23:27:07.665974 4010377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 23:27:07.677579 4010377 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:27:07.677627 4010377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 23:27:07.730623 4010377 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 23:27:07.730942 4010377 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 23:27:07.779131 4010377 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:27:07.779255 4010377 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:27:07.779299 4010377 kubeadm.go:322] OS: Linux
	I1218 23:27:07.779354 4010377 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 23:27:07.779408 4010377 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 23:27:07.779461 4010377 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 23:27:07.779513 4010377 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 23:27:07.779566 4010377 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 23:27:07.779622 4010377 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 23:27:07.779672 4010377 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1218 23:27:07.779725 4010377 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1218 23:27:07.779776 4010377 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1218 23:27:07.864445 4010377 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:27:07.864592 4010377 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:27:07.864715 4010377 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 23:27:08.141238 4010377 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:27:08.145495 4010377 out.go:204]   - Generating certificates and keys ...
	I1218 23:27:08.145599 4010377 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 23:27:08.145683 4010377 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 23:27:08.581339 4010377 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:27:09.861125 4010377 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:27:10.221815 4010377 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 23:27:10.759553 4010377 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 23:27:11.054183 4010377 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 23:27:11.054498 4010377 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-505406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:27:11.293424 4010377 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 23:27:11.293791 4010377 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-505406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:27:11.595672 4010377 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:27:11.809329 4010377 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:27:12.161010 4010377 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 23:27:12.161284 4010377 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:27:12.422361 4010377 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:27:12.675402 4010377 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:27:14.017890 4010377 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:27:14.957991 4010377 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:27:14.958777 4010377 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:27:14.962831 4010377 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:27:14.965282 4010377 out.go:204]   - Booting up control plane ...
	I1218 23:27:14.965381 4010377 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:27:14.965455 4010377 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:27:14.965883 4010377 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:27:14.980907 4010377 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:27:14.981723 4010377 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:27:14.981954 4010377 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 23:27:15.108287 4010377 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:27:23.113211 4010377 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003218 seconds
	I1218 23:27:23.113327 4010377 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:27:23.134951 4010377 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:27:23.660518 4010377 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:27:23.661011 4010377 kubeadm.go:322] [mark-control-plane] Marking the node addons-505406 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 23:27:24.173381 4010377 kubeadm.go:322] [bootstrap-token] Using token: 2pck3j.iwjhkdhxathh9tdv
	I1218 23:27:24.175424 4010377 out.go:204]   - Configuring RBAC rules ...
	I1218 23:27:24.175549 4010377 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:27:24.181106 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:27:24.190732 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:27:24.194705 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:27:24.199601 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:27:24.205042 4010377 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:27:24.216843 4010377 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:27:24.458364 4010377 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 23:27:24.588212 4010377 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 23:27:24.589384 4010377 kubeadm.go:322] 
	I1218 23:27:24.589464 4010377 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 23:27:24.589478 4010377 kubeadm.go:322] 
	I1218 23:27:24.589551 4010377 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 23:27:24.589563 4010377 kubeadm.go:322] 
	I1218 23:27:24.589588 4010377 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 23:27:24.589648 4010377 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:27:24.589702 4010377 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:27:24.589711 4010377 kubeadm.go:322] 
	I1218 23:27:24.589770 4010377 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 23:27:24.589803 4010377 kubeadm.go:322] 
	I1218 23:27:24.589901 4010377 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 23:27:24.589926 4010377 kubeadm.go:322] 
	I1218 23:27:24.589976 4010377 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 23:27:24.590050 4010377 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:27:24.590118 4010377 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:27:24.590127 4010377 kubeadm.go:322] 
	I1218 23:27:24.590486 4010377 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:27:24.590626 4010377 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 23:27:24.590637 4010377 kubeadm.go:322] 
	I1218 23:27:24.590767 4010377 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2pck3j.iwjhkdhxathh9tdv \
	I1218 23:27:24.590894 4010377 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e312defcfa05e02bd60f7c592d29d4c5d570ecf2885804f11be3cfbfa6eee99b \
	I1218 23:27:24.590917 4010377 kubeadm.go:322] 	--control-plane 
	I1218 23:27:24.590922 4010377 kubeadm.go:322] 
	I1218 23:27:24.591009 4010377 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:27:24.591020 4010377 kubeadm.go:322] 
	I1218 23:27:24.591164 4010377 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2pck3j.iwjhkdhxathh9tdv \
	I1218 23:27:24.591316 4010377 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e312defcfa05e02bd60f7c592d29d4c5d570ecf2885804f11be3cfbfa6eee99b 
	I1218 23:27:24.595552 4010377 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:27:24.595663 4010377 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:27:24.595679 4010377 cni.go:84] Creating CNI manager for ""
	I1218 23:27:24.595687 4010377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:27:24.597875 4010377 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 23:27:24.599877 4010377 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:27:24.607018 4010377 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 23:27:24.607040 4010377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:27:24.638467 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:27:25.592864 4010377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 23:27:25.593031 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:25.593120 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=addons-505406 minikube.k8s.io/updated_at=2023_12_18T23_27_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:25.610811 4010377 ops.go:34] apiserver oom_adj: -16
	I1218 23:27:25.794925 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:26.295909 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:26.795050 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:27.295489 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:27.795454 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:28.295540 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:28.795709 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:29.295075 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:29.795566 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:30.295216 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:30.795968 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:31.295421 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:31.794981 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:32.295209 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:32.795094 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:33.295036 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:33.795544 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:34.295311 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:34.795454 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:35.295972 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:35.795250 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:36.295068 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:36.795020 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:36.940690 4010377 kubeadm.go:1088] duration metric: took 11.347713969s to wait for elevateKubeSystemPrivileges.
	I1218 23:27:36.940714 4010377 kubeadm.go:406] StartCluster complete in 29.340467938s
	I1218 23:27:36.940758 4010377 settings.go:142] acquiring lock: {Name:mkc0bc26fbf229b708fca267aea9769f0f259f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:36.941396 4010377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:27:36.941899 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/kubeconfig: {Name:mk056ad1e9e70ee26734d70551bb1d18ee8e2c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:36.942609 4010377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 23:27:36.942893 4010377 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:36.943066 4010377 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1218 23:27:36.943186 4010377 addons.go:69] Setting volumesnapshots=true in profile "addons-505406"
	I1218 23:27:36.943201 4010377 addons.go:231] Setting addon volumesnapshots=true in "addons-505406"
	I1218 23:27:36.943242 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.943728 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.944253 4010377 addons.go:69] Setting cloud-spanner=true in profile "addons-505406"
	I1218 23:27:36.944277 4010377 addons.go:231] Setting addon cloud-spanner=true in "addons-505406"
	I1218 23:27:36.944333 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.944814 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945886 4010377 addons.go:69] Setting metrics-server=true in profile "addons-505406"
	I1218 23:27:36.945987 4010377 addons.go:231] Setting addon metrics-server=true in "addons-505406"
	I1218 23:27:36.946048 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.946553 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.946954 4010377 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-505406"
	I1218 23:27:36.946976 4010377 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-505406"
	I1218 23:27:36.947025 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.947433 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945918 4010377 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-505406"
	I1218 23:27:36.967686 4010377 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-505406"
	I1218 23:27:36.967785 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.968351 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.969086 4010377 addons.go:69] Setting registry=true in profile "addons-505406"
	I1218 23:27:36.969117 4010377 addons.go:231] Setting addon registry=true in "addons-505406"
	I1218 23:27:36.969170 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.969703 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.996081 4010377 addons.go:69] Setting storage-provisioner=true in profile "addons-505406"
	I1218 23:27:36.996113 4010377 addons.go:231] Setting addon storage-provisioner=true in "addons-505406"
	I1218 23:27:36.996165 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.996614 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945931 4010377 addons.go:69] Setting default-storageclass=true in profile "addons-505406"
	I1218 23:27:37.005238 4010377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-505406"
	I1218 23:27:37.005647 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.017015 4010377 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-505406"
	I1218 23:27:37.017062 4010377 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-505406"
	I1218 23:27:37.017461 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945946 4010377 addons.go:69] Setting gcp-auth=true in profile "addons-505406"
	I1218 23:27:37.024814 4010377 mustload.go:65] Loading cluster: addons-505406
	I1218 23:27:36.945955 4010377 addons.go:69] Setting ingress=true in profile "addons-505406"
	I1218 23:27:36.945961 4010377 addons.go:69] Setting ingress-dns=true in profile "addons-505406"
	I1218 23:27:36.945972 4010377 addons.go:69] Setting inspektor-gadget=true in profile "addons-505406"
	I1218 23:27:37.044425 4010377 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1218 23:27:37.050498 4010377 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1218 23:27:37.050566 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1218 23:27:37.050676 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.069167 4010377 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:37.069625 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.087123 4010377 addons.go:231] Setting addon ingress=true in "addons-505406"
	I1218 23:27:37.087216 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.087698 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.120554 4010377 addons.go:231] Setting addon ingress-dns=true in "addons-505406"
	I1218 23:27:37.120666 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.125929 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.168521 4010377 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1218 23:27:37.188300 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1218 23:27:37.198677 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1218 23:27:37.198835 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.157534 4010377 addons.go:231] Setting addon inspektor-gadget=true in "addons-505406"
	I1218 23:27:37.199897 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.200461 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.219446 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1218 23:27:37.223237 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1218 23:27:37.223305 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1218 23:27:37.223410 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.254535 4010377 addons.go:231] Setting addon default-storageclass=true in "addons-505406"
	I1218 23:27:37.254577 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.255058 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.258586 4010377 out.go:177]   - Using image docker.io/registry:2.8.3
	I1218 23:27:37.266168 4010377 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1218 23:27:37.268172 4010377 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 23:27:37.268241 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1218 23:27:37.268346 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.284780 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1218 23:27:37.287749 4010377 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1218 23:27:37.287770 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1218 23:27:37.287838 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.285051 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1218 23:27:37.360782 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1218 23:27:37.365723 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1218 23:27:37.367851 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1218 23:27:37.372998 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1218 23:27:37.374835 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1218 23:27:37.376594 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1218 23:27:37.378785 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1218 23:27:37.375810 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.388948 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:27:37.391869 4010377 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:27:37.391885 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 23:27:37.391962 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.389142 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1218 23:27:37.397629 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1218 23:27:37.397737 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.382940 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.419539 4010377 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-505406"
	I1218 23:27:37.419577 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.420020 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.475040 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:27:37.481071 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:27:37.483051 4010377 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1218 23:27:37.490871 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1218 23:27:37.490909 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1218 23:27:37.490994 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.496965 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1218 23:27:37.502768 4010377 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 23:27:37.502805 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1218 23:27:37.502892 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.512346 4010377 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-505406" context rescaled to 1 replicas
	I1218 23:27:37.512405 4010377 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 23:27:37.521431 4010377 out.go:177] * Verifying Kubernetes components...
	I1218 23:27:37.525317 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:27:37.536128 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.546432 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1218 23:27:37.549405 4010377 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 23:27:37.549435 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1218 23:27:37.549526 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.597729 4010377 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 23:27:37.597751 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 23:27:37.597891 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.603779 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.617171 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.633099 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.696081 4010377 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1218 23:27:37.700183 4010377 out.go:177]   - Using image docker.io/busybox:stable
	I1218 23:27:37.696624 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.702835 4010377 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 23:27:37.703549 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1218 23:27:37.703647 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.751697 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.758154 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.801191 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.810147 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.828835 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.848605 4010377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 23:27:37.854466 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	W1218 23:27:37.855626 4010377 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1218 23:27:37.855682 4010377 retry.go:31] will retry after 212.486282ms: ssh: handshake failed: EOF
	I1218 23:27:38.012818 4010377 node_ready.go:35] waiting up to 6m0s for node "addons-505406" to be "Ready" ...
	I1218 23:27:38.018940 4010377 node_ready.go:49] node "addons-505406" has status "Ready":"True"
	I1218 23:27:38.018979 4010377 node_ready.go:38] duration metric: took 6.078168ms waiting for node "addons-505406" to be "Ready" ...
	I1218 23:27:38.018992 4010377 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:27:38.049703 4010377 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace to be "Ready" ...
	I1218 23:27:38.475647 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 23:27:38.522042 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1218 23:27:38.522076 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1218 23:27:38.550513 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 23:27:38.606326 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1218 23:27:38.606356 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1218 23:27:38.619325 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 23:27:38.630005 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1218 23:27:38.669075 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 23:27:38.689616 4010377 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1218 23:27:38.689650 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1218 23:27:38.690754 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1218 23:27:38.690774 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1218 23:27:38.715954 4010377 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1218 23:27:38.715981 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1218 23:27:38.839423 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1218 23:27:38.839451 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1218 23:27:38.879262 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:27:38.937136 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1218 23:27:38.937172 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1218 23:27:38.977026 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1218 23:27:38.977060 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1218 23:27:39.023664 4010377 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1218 23:27:39.023735 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1218 23:27:39.041843 4010377 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1218 23:27:39.041868 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1218 23:27:39.080016 4010377 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1218 23:27:39.080054 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1218 23:27:39.105237 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1218 23:27:39.105263 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1218 23:27:39.135881 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 23:27:39.147912 4010377 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:27:39.147947 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1218 23:27:39.170261 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 23:27:39.170293 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1218 23:27:39.292370 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1218 23:27:39.292406 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1218 23:27:39.305924 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:27:39.314005 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1218 23:27:39.321383 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1218 23:27:39.321411 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1218 23:27:39.392763 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 23:27:39.496691 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1218 23:27:39.496727 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1218 23:27:39.555986 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1218 23:27:39.556022 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1218 23:27:39.781539 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1218 23:27:39.781573 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1218 23:27:39.807632 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1218 23:27:39.807666 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1218 23:27:39.854732 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1218 23:27:39.854758 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1218 23:27:39.933268 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 23:27:39.933290 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1218 23:27:39.935028 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1218 23:27:39.935046 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1218 23:27:40.053414 4010377 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5d47t" not found
	I1218 23:27:40.053453 4010377 pod_ready.go:81] duration metric: took 2.003704039s waiting for pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace to be "Ready" ...
	E1218 23:27:40.053466 4010377 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5d47t" not found
	I1218 23:27:40.053474 4010377 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace to be "Ready" ...
	I1218 23:27:40.201187 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1218 23:27:40.201212 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1218 23:27:40.239561 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 23:27:40.243222 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1218 23:27:40.243246 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1218 23:27:40.523284 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1218 23:27:40.523308 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1218 23:27:40.802941 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 23:27:40.802978 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1218 23:27:40.859184 4010377 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.010530764s)
	I1218 23:27:40.859223 4010377 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1218 23:27:40.859274 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.383543388s)
	I1218 23:27:40.996717 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 23:27:42.076240 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:42.382078 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.831519411s)
	I1218 23:27:44.230488 4010377 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1218 23:27:44.230593 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:44.264991 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:44.563648 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:44.633526 4010377 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1218 23:27:44.738103 4010377 addons.go:231] Setting addon gcp-auth=true in "addons-505406"
	I1218 23:27:44.738203 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:44.738753 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:44.769206 4010377 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1218 23:27:44.769257 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:44.801102 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:44.902199 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.27215206s)
	I1218 23:27:44.902259 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.233161896s)
	I1218 23:27:44.902311 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.023007583s)
	I1218 23:27:44.902338 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.766432241s)
	I1218 23:27:44.902568 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596613208s)
	W1218 23:27:44.902590 4010377 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 23:27:44.902605 4010377 retry.go:31] will retry after 238.498404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 23:27:44.902635 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.588604215s)
	I1218 23:27:44.902644 4010377 addons.go:467] Verifying addon registry=true in "addons-505406"
	I1218 23:27:44.905453 4010377 out.go:177] * Verifying registry addon...
	I1218 23:27:44.903071 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.510276629s)
	I1218 23:27:44.903162 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.663567968s)
	I1218 23:27:44.903577 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.284217075s)
	I1218 23:27:44.907384 4010377 addons.go:467] Verifying addon ingress=true in "addons-505406"
	I1218 23:27:44.909551 4010377 out.go:177] * Verifying ingress addon...
	I1218 23:27:44.907494 4010377 addons.go:467] Verifying addon metrics-server=true in "addons-505406"
	I1218 23:27:44.908279 4010377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1218 23:27:44.912280 4010377 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1218 23:27:44.923171 4010377 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 23:27:44.924393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:44.924162 4010377 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1218 23:27:44.924467 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:45.141410 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:27:45.419677 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:45.421552 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:45.921555 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:45.931092 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:46.424103 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:46.424814 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:46.569163 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:46.628381 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.631591619s)
	I1218 23:27:46.628477 4010377 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-505406"
	I1218 23:27:46.631043 4010377 out.go:177] * Verifying csi-hostpath-driver addon...
	I1218 23:27:46.628715 4010377 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.859486779s)
	I1218 23:27:46.634674 4010377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1218 23:27:46.637411 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:27:46.639499 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1218 23:27:46.642155 4010377 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1218 23:27:46.642230 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1218 23:27:46.645503 4010377 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 23:27:46.645582 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:46.730970 4010377 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1218 23:27:46.731043 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1218 23:27:46.806198 4010377 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 23:27:46.806265 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1218 23:27:46.880199 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 23:27:46.919331 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:46.922116 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:47.143897 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:47.418412 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:47.420585 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:47.436371 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.294805029s)
	I1218 23:27:47.642170 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:47.924324 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:47.927953 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:48.013355 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.13306904s)
	I1218 23:27:48.017018 4010377 addons.go:467] Verifying addon gcp-auth=true in "addons-505406"
	I1218 23:27:48.021080 4010377 out.go:177] * Verifying gcp-auth addon...
	I1218 23:27:48.024258 4010377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1218 23:27:48.035888 4010377 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1218 23:27:48.035956 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:48.146985 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:48.418551 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:48.419746 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:48.528279 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:48.641370 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:48.917408 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:48.918369 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:49.028282 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:49.060562 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:49.141580 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:49.420301 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:49.421720 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:49.528284 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:49.641348 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:49.918380 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:49.919899 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:50.029149 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:50.141871 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:50.418938 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:50.419565 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:50.529043 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:50.641840 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:50.918711 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:50.924106 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:51.030325 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:51.062305 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:51.141004 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:51.420381 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:51.423611 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:51.530602 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:51.641326 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:51.919506 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:51.921135 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:52.030913 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:52.146887 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:52.419107 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:52.419617 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:52.528957 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:52.640549 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:52.918921 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:52.921464 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:53.028238 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:53.144708 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:53.419728 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:53.421584 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:53.527988 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:53.560844 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:53.641623 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:53.918514 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:53.919582 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:54.029315 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:54.146835 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:54.418216 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:54.418346 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:54.528074 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:54.642480 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:54.919058 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:54.920428 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:55.028333 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:55.143884 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:55.416731 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:55.417648 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:55.528222 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:55.640804 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:55.916398 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:55.917876 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:56.029039 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:56.060340 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:56.140964 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:56.420044 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:56.421804 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:56.528516 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:56.641038 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:56.921216 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:56.922294 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:57.028319 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:57.144445 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:57.423297 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:57.423467 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:57.527921 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:57.641340 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:57.917878 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:57.918234 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:58.028592 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:58.144985 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:58.417343 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:58.417582 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:58.527893 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:58.560309 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:58.641543 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:58.917246 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:58.919822 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:59.028575 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:59.140838 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:59.416415 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:59.416921 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:59.528507 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:59.642061 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:59.917321 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:59.917848 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:00.060270 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:00.188171 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:00.423766 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:00.425821 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:00.528854 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:00.560415 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:00.641147 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:00.918034 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:00.918830 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:01.028131 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:01.141289 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:01.417071 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:01.417350 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:01.528863 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:01.642124 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:01.917393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:01.918212 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:02.027913 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:02.141591 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:02.416582 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:02.417885 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:02.528585 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:02.560626 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:02.640250 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:02.916272 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:02.916803 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:03.028371 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:03.146658 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:03.418029 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:03.418220 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:03.528801 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:03.641626 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:03.916910 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:03.918684 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:04.028943 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:04.141304 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:04.417424 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:04.418289 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:04.527869 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:04.560773 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:04.641050 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:04.917241 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:04.919964 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:05.028584 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:05.151301 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:05.417621 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:05.418207 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:05.529278 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:05.640766 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:05.916766 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:05.918063 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:06.028930 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:06.147081 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:06.417777 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:06.419164 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:06.528835 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:06.641685 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:06.917204 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:06.918184 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:07.028180 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:07.060381 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:07.141209 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:07.416609 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:07.418039 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:07.528413 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:07.640155 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:07.917806 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:07.918716 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:08.028496 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:08.139939 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:08.416747 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:08.418935 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:08.528684 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:08.640409 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:08.916526 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:08.917443 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:09.028175 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:09.061048 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:09.143868 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:09.417393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:09.417526 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:09.527969 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:09.640681 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:09.919519 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:09.921483 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:10.028803 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:10.141341 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:10.416562 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:10.418131 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:10.528334 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:10.641220 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:10.916059 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:10.917275 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:11.028188 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:11.061198 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:11.140799 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:11.416663 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:11.418635 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:11.528370 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:11.641470 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:11.917825 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:11.918749 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:12.028670 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:12.147346 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:12.419427 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:12.420015 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:12.528298 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:12.642136 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:12.924857 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:12.926175 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:13.030603 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:13.061870 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:13.152709 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:13.423204 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:13.423466 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:13.529138 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:13.643726 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:13.922382 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:13.923804 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:14.029824 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:14.145393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:14.420605 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:14.422444 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:14.529282 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:14.643887 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:14.939208 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:14.941110 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:15.031313 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:15.070391 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:15.145706 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:15.417834 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:15.418750 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:15.528820 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:15.643698 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:15.920037 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:15.921052 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:16.028453 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:16.143488 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:16.417693 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:16.418436 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:16.528910 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:16.641559 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:16.922043 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:16.922652 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:17.029129 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:17.147337 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:17.417443 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:17.419901 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:17.529014 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:17.560348 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:17.642194 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:17.922512 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:17.923405 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:18.038443 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:18.146432 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:18.428907 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:18.429928 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:18.528620 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:18.560616 4010377 pod_ready.go:92] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.560644 4010377 pod_ready.go:81] duration metric: took 38.507160518s waiting for pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.560659 4010377 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.567454 4010377 pod_ready.go:92] pod "etcd-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.567480 4010377 pod_ready.go:81] duration metric: took 6.813737ms waiting for pod "etcd-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.567495 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.574360 4010377 pod_ready.go:92] pod "kube-apiserver-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.574386 4010377 pod_ready.go:81] duration metric: took 6.881601ms waiting for pod "kube-apiserver-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.574400 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.580936 4010377 pod_ready.go:92] pod "kube-controller-manager-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.580964 4010377 pod_ready.go:81] duration metric: took 6.55563ms waiting for pod "kube-controller-manager-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.580977 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w7pxw" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.588894 4010377 pod_ready.go:92] pod "kube-proxy-w7pxw" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.588920 4010377 pod_ready.go:81] duration metric: took 7.934935ms waiting for pod "kube-proxy-w7pxw" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.588933 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.640525 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:18.918205 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:18.919599 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:18.958545 4010377 pod_ready.go:92] pod "kube-scheduler-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.958619 4010377 pod_ready.go:81] duration metric: took 369.676303ms waiting for pod "kube-scheduler-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.958646 4010377 pod_ready.go:38] duration metric: took 40.939640488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:28:18.958693 4010377 api_server.go:52] waiting for apiserver process to appear ...
	I1218 23:28:18.958793 4010377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:28:18.978375 4010377 api_server.go:72] duration metric: took 41.465925889s to wait for apiserver process to appear ...
	I1218 23:28:18.978454 4010377 api_server.go:88] waiting for apiserver healthz status ...
	I1218 23:28:18.978487 4010377 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 23:28:18.988691 4010377 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1218 23:28:18.990295 4010377 api_server.go:141] control plane version: v1.28.4
	I1218 23:28:18.990322 4010377 api_server.go:131] duration metric: took 11.848195ms to wait for apiserver health ...
	I1218 23:28:18.990332 4010377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 23:28:19.029125 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:19.140488 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:19.165260 4010377 system_pods.go:59] 18 kube-system pods found
	I1218 23:28:19.165390 4010377 system_pods.go:61] "coredns-5dd5756b68-gz5tv" [e63b5341-2e55-47f0-b88e-dc22e0403e80] Running
	I1218 23:28:19.165416 4010377 system_pods.go:61] "csi-hostpath-attacher-0" [87e738f3-e48f-4316-8ed7-ccccd9114b41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1218 23:28:19.165456 4010377 system_pods.go:61] "csi-hostpath-resizer-0" [cb9a514e-9677-434d-b771-76f09efcd2f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1218 23:28:19.165487 4010377 system_pods.go:61] "csi-hostpathplugin-kwqtb" [2409be6f-8c39-4c22-b0d9-125994297ab2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 23:28:19.165509 4010377 system_pods.go:61] "etcd-addons-505406" [6034f836-b02e-4727-b166-5aa9fb36bbf4] Running
	I1218 23:28:19.165531 4010377 system_pods.go:61] "kindnet-ktkh2" [54313ed0-0489-48b5-93c3-351993a995c9] Running
	I1218 23:28:19.165562 4010377 system_pods.go:61] "kube-apiserver-addons-505406" [fc22a2d0-866e-4268-b6fe-1fb26e29631e] Running
	I1218 23:28:19.165587 4010377 system_pods.go:61] "kube-controller-manager-addons-505406" [8240dd25-f7d5-48d9-836a-fed1350af622] Running
	I1218 23:28:19.165616 4010377 system_pods.go:61] "kube-ingress-dns-minikube" [3a5f8190-8536-42e5-b817-a63d75a1d1b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 23:28:19.165641 4010377 system_pods.go:61] "kube-proxy-w7pxw" [9c0fe76b-5b4a-4787-8efb-4ec3fd477fa7] Running
	I1218 23:28:19.165673 4010377 system_pods.go:61] "kube-scheduler-addons-505406" [bb60524d-bc22-4d01-8eee-bf44e27d12d2] Running
	I1218 23:28:19.165703 4010377 system_pods.go:61] "metrics-server-7c66d45ddc-zjzgk" [d76dd58c-71b2-415f-b318-39b1117343c1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 23:28:19.165730 4010377 system_pods.go:61] "nvidia-device-plugin-daemonset-sr2zs" [5b296706-5778-44a8-a5fe-4eeaec480f20] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 23:28:19.165759 4010377 system_pods.go:61] "registry-proxy-x7nmz" [953b9840-520c-42b3-8b05-574b76391cd3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 23:28:19.165795 4010377 system_pods.go:61] "registry-wzr22" [24ef522d-90a4-4844-810a-182a22d8094c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 23:28:19.165820 4010377 system_pods.go:61] "snapshot-controller-58dbcc7b99-r8g6k" [ec8f8883-b1e3-4610-9fd1-e0eafac8e50a] Running
	I1218 23:28:19.165845 4010377 system_pods.go:61] "snapshot-controller-58dbcc7b99-z5h7b" [981d723e-25af-46f7-8a80-2251c7aad093] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 23:28:19.165878 4010377 system_pods.go:61] "storage-provisioner" [77221557-03fd-4a34-aa8b-c096521c83e3] Running
	I1218 23:28:19.165904 4010377 system_pods.go:74] duration metric: took 175.564322ms to wait for pod list to return data ...
	I1218 23:28:19.165928 4010377 default_sa.go:34] waiting for default service account to be created ...
	I1218 23:28:19.356973 4010377 default_sa.go:45] found service account: "default"
	I1218 23:28:19.356999 4010377 default_sa.go:55] duration metric: took 191.051211ms for default service account to be created ...
	I1218 23:28:19.357011 4010377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 23:28:19.416774 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:19.418741 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:19.528413 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:19.563887 4010377 system_pods.go:86] 18 kube-system pods found
	I1218 23:28:19.563920 4010377 system_pods.go:89] "coredns-5dd5756b68-gz5tv" [e63b5341-2e55-47f0-b88e-dc22e0403e80] Running
	I1218 23:28:19.563930 4010377 system_pods.go:89] "csi-hostpath-attacher-0" [87e738f3-e48f-4316-8ed7-ccccd9114b41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1218 23:28:19.563940 4010377 system_pods.go:89] "csi-hostpath-resizer-0" [cb9a514e-9677-434d-b771-76f09efcd2f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1218 23:28:19.563952 4010377 system_pods.go:89] "csi-hostpathplugin-kwqtb" [2409be6f-8c39-4c22-b0d9-125994297ab2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 23:28:19.563964 4010377 system_pods.go:89] "etcd-addons-505406" [6034f836-b02e-4727-b166-5aa9fb36bbf4] Running
	I1218 23:28:19.563974 4010377 system_pods.go:89] "kindnet-ktkh2" [54313ed0-0489-48b5-93c3-351993a995c9] Running
	I1218 23:28:19.563979 4010377 system_pods.go:89] "kube-apiserver-addons-505406" [fc22a2d0-866e-4268-b6fe-1fb26e29631e] Running
	I1218 23:28:19.563988 4010377 system_pods.go:89] "kube-controller-manager-addons-505406" [8240dd25-f7d5-48d9-836a-fed1350af622] Running
	I1218 23:28:19.563997 4010377 system_pods.go:89] "kube-ingress-dns-minikube" [3a5f8190-8536-42e5-b817-a63d75a1d1b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 23:28:19.564003 4010377 system_pods.go:89] "kube-proxy-w7pxw" [9c0fe76b-5b4a-4787-8efb-4ec3fd477fa7] Running
	I1218 23:28:19.564014 4010377 system_pods.go:89] "kube-scheduler-addons-505406" [bb60524d-bc22-4d01-8eee-bf44e27d12d2] Running
	I1218 23:28:19.564022 4010377 system_pods.go:89] "metrics-server-7c66d45ddc-zjzgk" [d76dd58c-71b2-415f-b318-39b1117343c1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 23:28:19.564032 4010377 system_pods.go:89] "nvidia-device-plugin-daemonset-sr2zs" [5b296706-5778-44a8-a5fe-4eeaec480f20] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 23:28:19.564040 4010377 system_pods.go:89] "registry-proxy-x7nmz" [953b9840-520c-42b3-8b05-574b76391cd3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 23:28:19.564047 4010377 system_pods.go:89] "registry-wzr22" [24ef522d-90a4-4844-810a-182a22d8094c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 23:28:19.564054 4010377 system_pods.go:89] "snapshot-controller-58dbcc7b99-r8g6k" [ec8f8883-b1e3-4610-9fd1-e0eafac8e50a] Running
	I1218 23:28:19.564062 4010377 system_pods.go:89] "snapshot-controller-58dbcc7b99-z5h7b" [981d723e-25af-46f7-8a80-2251c7aad093] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 23:28:19.564072 4010377 system_pods.go:89] "storage-provisioner" [77221557-03fd-4a34-aa8b-c096521c83e3] Running
	I1218 23:28:19.564079 4010377 system_pods.go:126] duration metric: took 207.06165ms to wait for k8s-apps to be running ...
	I1218 23:28:19.564090 4010377 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:28:19.564147 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:28:19.579290 4010377 system_svc.go:56] duration metric: took 15.192163ms WaitForService to wait for kubelet.
	I1218 23:28:19.579317 4010377 kubeadm.go:581] duration metric: took 42.066873795s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:28:19.579337 4010377 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:28:19.661941 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:19.757961 4010377 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:28:19.758043 4010377 node_conditions.go:123] node cpu capacity is 2
	I1218 23:28:19.758083 4010377 node_conditions.go:105] duration metric: took 178.739307ms to run NodePressure ...
	I1218 23:28:19.758115 4010377 start.go:228] waiting for startup goroutines ...
	I1218 23:28:19.925551 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:19.926434 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:20.028747 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:20.141178 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:20.419471 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:20.420212 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:20.527869 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:20.641293 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:20.916308 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:20.918392 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:21.029887 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:21.146689 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:21.417290 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:21.418463 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:21.528667 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:21.641269 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:21.917506 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:21.918362 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:22.028440 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:22.148032 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:22.417500 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:22.419890 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:22.528615 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:22.641444 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:22.917953 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:22.919776 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:23.028678 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:23.150182 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:23.418621 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:23.420761 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:23.529705 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:23.641829 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:23.917836 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:23.917907 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:24.035435 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:24.146386 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:24.419908 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:24.421179 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:24.528147 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:24.641948 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:24.917381 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:24.918978 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:25.030203 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:25.145261 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:25.419632 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:25.422431 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:25.529304 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:25.642261 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:25.919290 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:25.920205 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:26.028685 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:26.143487 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:26.419750 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:26.420713 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:26.528295 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:26.640924 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:26.916838 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:26.918324 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:27.028575 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:27.140826 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:27.417218 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:27.418109 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:27.528692 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:27.640038 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:27.917107 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:27.918879 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:28.029456 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:28.142593 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:28.418670 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:28.420047 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:28.529071 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:28.641364 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:28.918183 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:28.919292 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:29.028172 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:29.141279 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:29.418753 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:29.420543 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:29.528536 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:29.644060 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:29.919221 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:29.920139 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:30.032264 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:30.151059 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:30.417829 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:30.418652 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:30.529447 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:30.641023 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:30.917460 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:30.918325 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:31.028012 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:31.146041 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:31.417361 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:31.417737 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:31.528516 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:31.641217 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:31.916770 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:31.917932 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:32.028614 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:32.141537 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:32.415965 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:32.417106 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:32.528622 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:32.640591 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:32.917691 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:32.918013 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:33.028554 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:33.146481 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:33.419059 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:33.419615 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:33.528947 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:33.640765 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:33.917351 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:33.918400 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:34.028341 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:34.143527 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:34.416714 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:34.418742 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:34.529297 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:34.641151 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:34.920011 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:34.920933 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:35.029182 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:35.147388 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:35.418337 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:35.419190 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:35.528119 4010377 kapi.go:107] duration metric: took 47.503859123s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1218 23:28:35.534772 4010377 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-505406 cluster.
	I1218 23:28:35.536840 4010377 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1218 23:28:35.538585 4010377 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1218 23:28:35.640183 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:35.917293 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:35.918368 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:36.144239 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:36.423560 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:36.425011 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:36.641768 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:36.919091 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:36.920691 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:37.141297 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:37.419043 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:37.421412 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:37.641728 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:37.919608 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:37.920706 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:38.146008 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:38.420037 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:38.421074 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:38.641366 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:38.920681 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:38.921700 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:39.143924 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:39.418713 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:39.419219 4010377 kapi.go:107] duration metric: took 54.510939992s to wait for kubernetes.io/minikube-addons=registry ...
	I1218 23:28:39.640723 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:39.918452 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:40.143117 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:40.416664 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:40.641038 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:40.923054 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:41.142838 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:41.418375 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:41.641395 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:41.917112 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:42.142886 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:42.417178 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:42.640984 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:42.924238 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:43.141547 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:43.416902 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:43.641986 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:43.920934 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:44.140517 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:44.417449 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:44.641097 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:44.917399 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:45.151532 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:45.417978 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:45.641417 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:45.916808 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:46.141248 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:46.417244 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:46.641044 4010377 kapi.go:107] duration metric: took 1m0.006367208s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1218 23:28:46.917504 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:47.417245 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:47.916452 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:48.417287 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:48.917347 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:49.417109 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:49.917002 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:50.416505 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:50.917615 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:51.417468 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:51.916611 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:52.417790 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:52.922228 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:53.416709 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:53.918275 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:54.416710 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:54.917305 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:55.438920 4010377 kapi.go:107] duration metric: took 1m10.526638521s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1218 23:28:55.441937 4010377 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, gcp-auth, registry, csi-hostpath-driver, ingress
	I1218 23:28:55.444445 4010377 addons.go:502] enable addons completed in 1m18.501326331s: enabled=[nvidia-device-plugin storage-provisioner-rancher cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server default-storageclass volumesnapshots gcp-auth registry csi-hostpath-driver ingress]
	I1218 23:28:55.444563 4010377 start.go:233] waiting for cluster config update ...
	I1218 23:28:55.444624 4010377 start.go:242] writing updated cluster config ...
	I1218 23:28:55.445416 4010377 ssh_runner.go:195] Run: rm -f paused
	I1218 23:28:55.797115 4010377 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 23:28:55.801349 4010377 out.go:177] * Done! kubectl is now configured to use "addons-505406" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	4f46fe23b52fb       dd1b12fcb6097       8 seconds ago        Exited              hello-world-app                          2                   197b9f76b8cb0       hello-world-app-5d77478584-tt9xh
	591728da1e65e       f09fc93534f6a       33 seconds ago       Running             nginx                                    0                   8410070f74e50       nginx
	3f36a2465d9bf       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	8b06dca5aa5ef       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	880dfea0fb2e3       922312104da8a       About a minute ago   Running             liveness-probe                           0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	58e7b38faf296       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	92d3f49d8c601       2a5f29343eb03       About a minute ago   Running             gcp-auth                                 0                   453cb95cb5e3c       gcp-auth-d4c87556c-qvvtm
	ef395bc49bb60       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   8164f33da46b4       local-path-provisioner-78b46b4d5c-7w9vs
	13ce277a8e71c       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	4294ca04a895f       a8df1f5260cb4       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   1e09da3ac2ec3       nvidia-device-plugin-daemonset-sr2zs
	534aae4930e00       f7be1a5e72885       About a minute ago   Running             cloud-spanner-emulator                   0                   fa6cb6b4355d4       cloud-spanner-emulator-5649c69bf6-j6mj7
	1f95b97d17df8       af594c6a879f2       About a minute ago   Exited              patch                                    0                   9221e3de7fe0b       ingress-nginx-admission-patch-7mb7n
	af2154afe0636       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   a3ea3a21f56a1       csi-hostpath-attacher-0
	a4ab8d7cb7a8b       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   67782eadf843c       csi-hostpath-resizer-0
	a12797d4d7fc0       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	c9dd70f36c4eb       97e04611ad434       About a minute ago   Running             coredns                                  0                   9816b1470eba3       coredns-5dd5756b68-gz5tv
	6a63806b689b1       af594c6a879f2       About a minute ago   Exited              create                                   0                   ee9696273f635       ingress-nginx-admission-create-j9q4d
	ce090beec9f33       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   d67f9ace944b1       storage-provisioner
	ed6d8cd967f07       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                              0                   fd4daecb30762       kindnet-ktkh2
	20a4491161c86       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   fda768cae677f       kube-proxy-w7pxw
	cc31a5ffc3f26       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   b5734a513bd61       kube-apiserver-addons-505406
	111b78b6df78f       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   8fef85fbc4d9c       etcd-addons-505406
	c28d2856689f9       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   31e9c8039dfa0       kube-scheduler-addons-505406
	80dcec64bc94c       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   241a1d47cc166       kube-controller-manager-addons-505406
	
	* 
	* ==> containerd <==
	* Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.278828812Z" level=info msg="TearDown network for sandbox \"1648aa7d62a31dfea7cbc6edb620d9363562b2753399a33739b3d0c96d56f551\" successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.278890686Z" level=info msg="StopPodSandbox for \"1648aa7d62a31dfea7cbc6edb620d9363562b2753399a33739b3d0c96d56f551\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.717489998Z" level=info msg="RemoveContainer for \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\""
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.737086693Z" level=info msg="RemoveContainer for \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.744299294Z" level=error msg="ContainerStatus for \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": not found"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.749549005Z" level=info msg="RemoveContainer for \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\""
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.769588321Z" level=info msg="RemoveContainer for \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.770437456Z" level=error msg="ContainerStatus for \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": not found"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.667170466Z" level=info msg="Kill container \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\""
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.746117789Z" level=info msg="shim disconnected" id=018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.746634331Z" level=warning msg="cleaning up after shim disconnected" id=018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae namespace=k8s.io
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.746711778Z" level=info msg="cleaning up dead shim"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.762655108Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:30:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9332 runtime=io.containerd.runc.v2\n"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.767555184Z" level=info msg="StopContainer for \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\" returns successfully"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.768531817Z" level=info msg="StopPodSandbox for \"469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9\""
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.768685243Z" level=info msg="Container to stop \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.841144547Z" level=info msg="shim disconnected" id=469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.841444787Z" level=warning msg="cleaning up after shim disconnected" id=469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9 namespace=k8s.io
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.841555588Z" level=info msg="cleaning up dead shim"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.856198594Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:30:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9372 runtime=io.containerd.runc.v2\n"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.927412007Z" level=info msg="TearDown network for sandbox \"469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9\" successfully"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.927540309Z" level=info msg="StopPodSandbox for \"469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9\" returns successfully"
	Dec 18 23:30:01 addons-505406 containerd[744]: time="2023-12-18T23:30:01.764654220Z" level=info msg="RemoveContainer for \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\""
	Dec 18 23:30:01 addons-505406 containerd[744]: time="2023-12-18T23:30:01.772607912Z" level=info msg="RemoveContainer for \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\" returns successfully"
	Dec 18 23:30:01 addons-505406 containerd[744]: time="2023-12-18T23:30:01.773397437Z" level=error msg="ContainerStatus for \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\": not found"
	
	* 
	* ==> coredns [c9dd70f36c4eb293bf4eade6dd5572f67e303fc0d0c67d4be56ace1c5e8f1022] <==
	* [INFO] 10.244.0.19:37892 - 34743 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000283961s
	[INFO] 10.244.0.19:37892 - 19875 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054301s
	[INFO] 10.244.0.19:37892 - 46558 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052767s
	[INFO] 10.244.0.19:41787 - 14547 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000245496s
	[INFO] 10.244.0.19:37892 - 16953 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001215917s
	[INFO] 10.244.0.19:37892 - 9212 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001089593s
	[INFO] 10.244.0.19:37892 - 34763 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075675s
	[INFO] 10.244.0.19:45340 - 59043 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000125628s
	[INFO] 10.244.0.19:45340 - 29604 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109456s
	[INFO] 10.244.0.19:51837 - 42420 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060578s
	[INFO] 10.244.0.19:51837 - 62240 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000072886s
	[INFO] 10.244.0.19:51837 - 6385 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000104467s
	[INFO] 10.244.0.19:45340 - 32328 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000160738s
	[INFO] 10.244.0.19:45340 - 26654 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073419s
	[INFO] 10.244.0.19:51837 - 43712 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035872s
	[INFO] 10.244.0.19:51837 - 44135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073279s
	[INFO] 10.244.0.19:45340 - 39191 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037341s
	[INFO] 10.244.0.19:51837 - 51590 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037826s
	[INFO] 10.244.0.19:45340 - 53668 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042199s
	[INFO] 10.244.0.19:45340 - 23407 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006382757s
	[INFO] 10.244.0.19:51837 - 58671 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006560816s
	[INFO] 10.244.0.19:51837 - 40689 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297968s
	[INFO] 10.244.0.19:45340 - 34305 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001447202s
	[INFO] 10.244.0.19:51837 - 46219 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062416s
	[INFO] 10.244.0.19:45340 - 18090 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048049s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-505406
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-505406
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=addons-505406
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T23_27_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-505406
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-505406"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:27:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-505406
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:29:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-505406
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bdf59c5d75f4f8bb2d2e90b60e7fd8e
	  System UUID:                102d1fd6-2ff2-4b64-8ff3-ed26f256c4f7
	  Boot ID:                    890256b0-dbd9-440c-9da4-c1f4e1d4cc44
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-j6mj7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  default                     hello-world-app-5d77478584-tt9xh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-qvvtm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 coredns-5dd5756b68-gz5tv                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpathplugin-kwqtb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 etcd-addons-505406                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m42s
	  kube-system                 kindnet-ktkh2                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-addons-505406               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-addons-505406      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-w7pxw                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-addons-505406               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 nvidia-device-plugin-daemonset-sr2zs       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  local-path-storage          local-path-provisioner-78b46b4d5c-7w9vs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m27s  kube-proxy       
	  Normal  Starting                 2m42s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m42s  kubelet          Node addons-505406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s  kubelet          Node addons-505406 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s  kubelet          Node addons-505406 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m42s  kubelet          Node addons-505406 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m42s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m32s  kubelet          Node addons-505406 status is now: NodeReady
	  Normal  RegisteredNode           2m30s  node-controller  Node addons-505406 event: Registered Node addons-505406 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001125] FS-Cache: O-key=[8] '246e5c0100000000'
	[  +0.000829] FS-Cache: N-cookie c=0000023a [p=00000231 fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=0000000075b7cdeb
	[  +0.001111] FS-Cache: N-key=[8] '246e5c0100000000'
	[  +0.003808] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=00000234 [p=00000231 fl=226 nc=0 na=1]
	[  +0.001070] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=0000000030d65e10
	[  +0.001118] FS-Cache: O-key=[8] '246e5c0100000000'
	[  +0.000761] FS-Cache: N-cookie c=0000023b [p=00000231 fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000f67add3f
	[  +0.001180] FS-Cache: N-key=[8] '246e5c0100000000'
	[  +2.759454] FS-Cache: Duplicate cookie detected
	[  +0.000817] FS-Cache: O-cookie c=00000232 [p=00000231 fl=226 nc=0 na=1]
	[  +0.001047] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=000000000462ecb2
	[  +0.001127] FS-Cache: O-key=[8] '236e5c0100000000'
	[  +0.000764] FS-Cache: N-cookie c=0000023d [p=00000231 fl=2 nc=0 na=1]
	[  +0.001132] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000fe66079f
	[  +0.001124] FS-Cache: N-key=[8] '236e5c0100000000'
	[  +0.425127] FS-Cache: Duplicate cookie detected
	[  +0.000854] FS-Cache: O-cookie c=00000237 [p=00000231 fl=226 nc=0 na=1]
	[  +0.001164] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=0000000072235349
	[  +0.001236] FS-Cache: O-key=[8] '296e5c0100000000'
	[  +0.000804] FS-Cache: N-cookie c=0000023e [p=00000231 fl=2 nc=0 na=1]
	[  +0.001165] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000d61c5791
	[  +0.001214] FS-Cache: N-key=[8] '296e5c0100000000'
	
	* 
	* ==> etcd [111b78b6df78f7ec01d94768a4f407a45b63b80becff52634ea0adf94d8d9d54] <==
	* {"level":"info","ts":"2023-12-18T23:27:16.86739Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:27:16.867414Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:27:16.867423Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:27:16.867911Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-18T23:27:16.867926Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-18T23:27:16.871087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-18T23:27:16.871218Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-18T23:27:17.348929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-18T23:27:17.349037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-18T23:27:17.349086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-18T23:27:17.349158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.349192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.349235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.349276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.353086Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-505406 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T23:27:17.353221Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.353301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T23:27:17.36155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-18T23:27:17.364973Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.368981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.369149Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.353311Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T23:27:17.370353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-18T23:27:17.404913Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T23:27:17.40513Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [92d3f49d8c601e145cf0e168296af02725d3a93573812ef204b5b4a7156ccad7] <==
	* 2023/12/18 23:28:35 GCP Auth Webhook started!
	2023/12/18 23:29:07 Ready to marshal response ...
	2023/12/18 23:29:07 Ready to write response ...
	2023/12/18 23:29:18 Ready to marshal response ...
	2023/12/18 23:29:18 Ready to write response ...
	2023/12/18 23:29:30 Ready to marshal response ...
	2023/12/18 23:29:30 Ready to write response ...
	2023/12/18 23:29:39 Ready to marshal response ...
	2023/12/18 23:29:39 Ready to write response ...
	2023/12/18 23:29:48 Ready to marshal response ...
	2023/12/18 23:29:48 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:30:06 up 2 days,  7:12,  0 users,  load average: 2.99, 2.73, 2.62
	Linux addons-505406 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [ed6d8cd967f0788ecabbfc41d2adba3d1ff1687ade20dae72c72314d78adfc7a] <==
	* I1218 23:28:08.835461       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1218 23:28:08.849433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:08.849464       1 main.go:227] handling current node
	I1218 23:28:18.864407       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:18.864434       1 main.go:227] handling current node
	I1218 23:28:28.876442       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:28.876468       1 main.go:227] handling current node
	I1218 23:28:38.880726       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:38.880754       1 main.go:227] handling current node
	I1218 23:28:48.887705       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:48.887738       1 main.go:227] handling current node
	I1218 23:28:58.895601       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:58.895633       1 main.go:227] handling current node
	I1218 23:29:08.899916       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:08.899944       1 main.go:227] handling current node
	I1218 23:29:18.912616       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:18.912641       1 main.go:227] handling current node
	I1218 23:29:28.925226       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:28.925255       1 main.go:227] handling current node
	I1218 23:29:38.929882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:38.929913       1 main.go:227] handling current node
	I1218 23:29:48.940926       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:48.940955       1 main.go:227] handling current node
	I1218 23:29:58.956484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:58.956520       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [cc31a5ffc3f26172d2d2c55c47d28afe9d74094af496c29e4be838e78246b10a] <==
	* I1218 23:29:24.401020       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W1218 23:29:25.320083       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1218 23:29:28.523926       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1218 23:29:29.983163       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1218 23:29:30.363464       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.55.182"}
	I1218 23:29:40.293578       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.192.223"}
	I1218 23:29:58.821753       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.821794       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.837077       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.837129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.847093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.847499       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.864132       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.865818       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.883681       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.883735       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.886235       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.886293       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.905902       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.906823       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.914978       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.915022       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1218 23:29:59.847211       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1218 23:29:59.916233       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1218 23:29:59.933006       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [80dcec64bc94c4be300129c5f8f1df06353f3aa7b5ca03eafd1d8bc55821b494] <==
	* I1218 23:29:47.659668       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 23:29:57.633241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.792µs"
	I1218 23:29:57.641998       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1218 23:29:57.688677       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1218 23:29:58.709224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.084µs"
	I1218 23:29:58.961198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.047µs"
	E1218 23:29:59.849297       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:29:59.922118       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:29:59.935487       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:00.852650       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:00.852685       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:01.169971       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:01.170016       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:01.279134       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:01.279172       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:01.744156       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:01.744190       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:03.571082       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:03.571118       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:03.804608       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:03.804643       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:03.955346       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:03.955386       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 23:30:06.392202       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1218 23:30:06.392243       1 shared_informer.go:318] Caches are synced for resource quota
	
	* 
	* ==> kube-proxy [20a4491161c86aaa7542b651c6cf9ac91f2212222ba854f55cdb2a7528c7d1f3] <==
	* I1218 23:27:38.418683       1 server_others.go:69] "Using iptables proxy"
	I1218 23:27:38.451076       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1218 23:27:38.541756       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1218 23:27:38.544192       1 server_others.go:152] "Using iptables Proxier"
	I1218 23:27:38.544232       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1218 23:27:38.544242       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1218 23:27:38.544302       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 23:27:38.544539       1 server.go:846] "Version info" version="v1.28.4"
	I1218 23:27:38.544555       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 23:27:38.546005       1 config.go:188] "Starting service config controller"
	I1218 23:27:38.546057       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 23:27:38.546109       1 config.go:97] "Starting endpoint slice config controller"
	I1218 23:27:38.546115       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 23:27:38.548305       1 config.go:315] "Starting node config controller"
	I1218 23:27:38.548330       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 23:27:38.647729       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 23:27:38.647780       1 shared_informer.go:318] Caches are synced for service config
	I1218 23:27:38.649726       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c28d2856689f9f3f41a3169203f6ffaa98b4c51d37101e318368fbcb2c57cd8a] <==
	* W1218 23:27:21.946587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 23:27:21.946871       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.947039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 23:27:21.947190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 23:27:21.947376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 23:27:21.947485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 23:27:21.947724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 23:27:21.947861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 23:27:21.948063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.948179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 23:27:21.948213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1218 23:27:21.948267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 23:27:21.948563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 23:27:21.948951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1218 23:27:21.948985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.949124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 23:27:21.949145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.949192       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 23:27:21.949211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1218 23:27:21.949279       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 23:27:21.949296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 23:27:21.948196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.949775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 23:27:21.949801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1218 23:27:23.335795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.368271    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981d723e-25af-46f7-8a80-2251c7aad093-kube-api-access-vklzj" (OuterVolumeSpecName: "kube-api-access-vklzj") pod "981d723e-25af-46f7-8a80-2251c7aad093" (UID: "981d723e-25af-46f7-8a80-2251c7aad093"). InnerVolumeSpecName "kube-api-access-vklzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.369980    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a-kube-api-access-r8cnd" (OuterVolumeSpecName: "kube-api-access-r8cnd") pod "ec8f8883-b1e3-4610-9fd1-e0eafac8e50a" (UID: "ec8f8883-b1e3-4610-9fd1-e0eafac8e50a"). InnerVolumeSpecName "kube-api-access-r8cnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.467144    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vklzj\" (UniqueName: \"kubernetes.io/projected/981d723e-25af-46f7-8a80-2251c7aad093-kube-api-access-vklzj\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.467200    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r8cnd\" (UniqueName: \"kubernetes.io/projected/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a-kube-api-access-r8cnd\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.699733    1339 scope.go:117] "RemoveContainer" containerID="fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.743275    1339 scope.go:117] "RemoveContainer" containerID="fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: E1218 23:29:59.744723    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": not found" containerID="fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.744773    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"} err="failed to get container status \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": not found"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.748034    1339 scope.go:117] "RemoveContainer" containerID="4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.770087    1339 scope.go:117] "RemoveContainer" containerID="4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: E1218 23:29:59.771667    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": not found" containerID="4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.771773    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"} err="failed to get container status \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": not found"
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.607119    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="981d723e-25af-46f7-8a80-2251c7aad093" path="/var/lib/kubelet/pods/981d723e-25af-46f7-8a80-2251c7aad093/volumes"
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.607604    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec8f8883-b1e3-4610-9fd1-e0eafac8e50a" path="/var/lib/kubelet/pods/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a/volumes"
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.992172    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwwd\" (UniqueName: \"kubernetes.io/projected/cad694e6-d708-4710-b8d9-61731db55c47-kube-api-access-zbwwd\") pod \"cad694e6-d708-4710-b8d9-61731db55c47\" (UID: \"cad694e6-d708-4710-b8d9-61731db55c47\") "
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.992733    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cad694e6-d708-4710-b8d9-61731db55c47-webhook-cert\") pod \"cad694e6-d708-4710-b8d9-61731db55c47\" (UID: \"cad694e6-d708-4710-b8d9-61731db55c47\") "
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.995313    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad694e6-d708-4710-b8d9-61731db55c47-kube-api-access-zbwwd" (OuterVolumeSpecName: "kube-api-access-zbwwd") pod "cad694e6-d708-4710-b8d9-61731db55c47" (UID: "cad694e6-d708-4710-b8d9-61731db55c47"). InnerVolumeSpecName "kube-api-access-zbwwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.001046    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad694e6-d708-4710-b8d9-61731db55c47-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cad694e6-d708-4710-b8d9-61731db55c47" (UID: "cad694e6-d708-4710-b8d9-61731db55c47"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.093180    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zbwwd\" (UniqueName: \"kubernetes.io/projected/cad694e6-d708-4710-b8d9-61731db55c47-kube-api-access-zbwwd\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.093228    1339 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cad694e6-d708-4710-b8d9-61731db55c47-webhook-cert\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.762275    1339 scope.go:117] "RemoveContainer" containerID="018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae"
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.773003    1339 scope.go:117] "RemoveContainer" containerID="018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae"
	Dec 18 23:30:01 addons-505406 kubelet[1339]: E1218 23:30:01.773672    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\": not found" containerID="018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae"
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.773731    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae"} err="failed to get container status \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\": not found"
	Dec 18 23:30:02 addons-505406 kubelet[1339]: I1218 23:30:02.605865    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cad694e6-d708-4710-b8d9-61731db55c47" path="/var/lib/kubelet/pods/cad694e6-d708-4710-b8d9-61731db55c47/volumes"
	
	* 
	* ==> storage-provisioner [ce090beec9f33d032454c060633b807f4f48e527381868698dd659568876a342] <==
	* I1218 23:27:44.119851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 23:27:44.170096       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 23:27:44.170169       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 23:27:44.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 23:27:44.183291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-505406_af493d41-53b7-4610-adef-d54045f0af0b!
	I1218 23:27:44.193557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b38b873d-8aa0-4633-a415-96c7f51855eb", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-505406_af493d41-53b7-4610-adef-d54045f0af0b became leader
	I1218 23:27:44.286316       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-505406_af493d41-53b7-4610-adef-d54045f0af0b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-505406 -n addons-505406
helpers_test.go:261: (dbg) Run:  kubectl --context addons-505406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.00s)

TestAddons/parallel/CSI (66.78s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 41.341993ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-505406 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-505406 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [75c1c8c5-8936-4fe7-a1e4-c8873eeb44c1] Pending
helpers_test.go:344: "task-pv-pod" [75c1c8c5-8936-4fe7-a1e4-c8873eeb44c1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [75c1c8c5-8936-4fe7-a1e4-c8873eeb44c1] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003867275s
addons_test.go:583: (dbg) Run:  kubectl --context addons-505406 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-505406 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-505406 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-505406 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-505406 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-505406 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-505406 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2f4e1561-bde2-4eb8-a7bc-19b2137aa8d1] Pending
helpers_test.go:344: "task-pv-pod-restore" [2f4e1561-bde2-4eb8-a7bc-19b2137aa8d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2f4e1561-bde2-4eb8-a7bc-19b2137aa8d1] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008701825s
addons_test.go:625: (dbg) Run:  kubectl --context addons-505406 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-505406 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-505406 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-505406 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (801.622707ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1218 23:29:57.535586 4020685 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:29:57.536760 4020685 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:29:57.536776 4020685 out.go:309] Setting ErrFile to fd 2...
	I1218 23:29:57.536782 4020685 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:29:57.537080 4020685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:29:57.537454 4020685 mustload.go:65] Loading cluster: addons-505406
	I1218 23:29:57.537886 4020685 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:29:57.537911 4020685 addons.go:594] checking whether the cluster is paused
	I1218 23:29:57.538035 4020685 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:29:57.538061 4020685 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:29:57.538658 4020685 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:29:57.575420 4020685 ssh_runner.go:195] Run: systemctl --version
	I1218 23:29:57.575484 4020685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:29:57.601253 4020685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:29:57.759516 4020685 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 23:29:57.759610 4020685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:29:57.816435 4020685 cri.go:89] found id: "3f36a2465d9bf4da58f377181b0ed92c726b4d8454734f7dc99f7df391039929"
	I1218 23:29:57.816458 4020685 cri.go:89] found id: "8b06dca5aa5efc955fb88d34351246c952608fb9dedd30d5ac63b05fee00b778"
	I1218 23:29:57.816463 4020685 cri.go:89] found id: "880dfea0fb2e313665d7331f5679ceacd284ff89178477b9a6792e2017cc02a5"
	I1218 23:29:57.816468 4020685 cri.go:89] found id: "58e7b38faf29621e52dbe4462d5215a540753f976eacf002d5851d77e840be6f"
	I1218 23:29:57.816473 4020685 cri.go:89] found id: "13ce277a8e71cdafc801ac7666b5b763a5b242a97ef5fc1f2f25b4be0959406b"
	I1218 23:29:57.816478 4020685 cri.go:89] found id: "4294ca04a895fed71c9f16661e16c0dd3565f2bebff1b78a315371ebdd29c270"
	I1218 23:29:57.816482 4020685 cri.go:89] found id: "af2154afe06360e1d2caee5d682510daffec7862a541806e1f989bf2a0ec5d37"
	I1218 23:29:57.816486 4020685 cri.go:89] found id: "fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	I1218 23:29:57.816490 4020685 cri.go:89] found id: "a4ab8d7cb7a8b2abe09395a68bd1632d3ba2468aba16620c2a81c657d5cc8e61"
	I1218 23:29:57.816496 4020685 cri.go:89] found id: "a12797d4d7fc05d41e8e6c9403c68bff890a497f3b6be1793eac526d7e4752f6"
	I1218 23:29:57.816501 4020685 cri.go:89] found id: "c9dd70f36c4eb293bf4eade6dd5572f67e303fc0d0c67d4be56ace1c5e8f1022"
	I1218 23:29:57.816510 4020685 cri.go:89] found id: "4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	I1218 23:29:57.816516 4020685 cri.go:89] found id: "ce090beec9f33d032454c060633b807f4f48e527381868698dd659568876a342"
	I1218 23:29:57.816526 4020685 cri.go:89] found id: "ed6d8cd967f0788ecabbfc41d2adba3d1ff1687ade20dae72c72314d78adfc7a"
	I1218 23:29:57.816530 4020685 cri.go:89] found id: "20a4491161c86aaa7542b651c6cf9ac91f2212222ba854f55cdb2a7528c7d1f3"
	I1218 23:29:57.816544 4020685 cri.go:89] found id: "cc31a5ffc3f26172d2d2c55c47d28afe9d74094af496c29e4be838e78246b10a"
	I1218 23:29:57.816548 4020685 cri.go:89] found id: "111b78b6df78f7ec01d94768a4f407a45b63b80becff52634ea0adf94d8d9d54"
	I1218 23:29:57.816555 4020685 cri.go:89] found id: "c28d2856689f9f3f41a3169203f6ffaa98b4c51d37101e318368fbcb2c57cd8a"
	I1218 23:29:57.816559 4020685 cri.go:89] found id: "80dcec64bc94c4be300129c5f8f1df06353f3aa7b5ca03eafd1d8bc55821b494"
	I1218 23:29:57.816563 4020685 cri.go:89] found id: ""
	I1218 23:29:57.816612 4020685 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1218 23:29:57.890699 4020685 out.go:177] 
	W1218 23:29:57.892426 4020685 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-18T23:29:57Z" level=error msg="stat /run/containerd/runc/k8s.io/4f46fe23b52fb63657fee268eda5790be64b6e7858b475c41375166a636cb2b4: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-18T23:29:57Z" level=error msg="stat /run/containerd/runc/k8s.io/4f46fe23b52fb63657fee268eda5790be64b6e7858b475c41375166a636cb2b4: no such file or directory"
	
	W1218 23:29:57.892461 4020685 out.go:239] * 
	* 
	W1218 23:29:58.232532 4020685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 23:29:58.235462 4020685 out.go:177] 

** /stderr **
addons_test.go:639: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-505406 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable volumesnapshots --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-505406
helpers_test.go:235: (dbg) docker inspect addons-505406:

-- stdout --
	[
	    {
	        "Id": "d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728",
	        "Created": "2023-12-18T23:27:01.007821564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4010841,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T23:27:01.352233717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/hosts",
	        "LogPath": "/var/lib/docker/containers/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728/d20ab8e9043d40a445c5aa42100f8309eba8631b0735965d62c4e8a496dfc728-json.log",
	        "Name": "/addons-505406",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-505406:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-505406",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc-init/diff:/var/lib/docker/overlay2/348b7bce1eeb3fbac023de8c50816ddfb5fe3d6cead44e087fa78b4f572e0dfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a419cbacc723f5d7bca3761d6ad8e915cf6e254b437d4401088e3fe33e3bfdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-505406",
	                "Source": "/var/lib/docker/volumes/addons-505406/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-505406",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-505406",
	                "name.minikube.sigs.k8s.io": "addons-505406",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "544c8b68e7e92c659164975305e0f5f4fe521f9bb758d6e982126866ea4ea66f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42671"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42669"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/544c8b68e7e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-505406": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d20ab8e9043d",
	                        "addons-505406"
	                    ],
	                    "NetworkID": "db63daa0791f94b269968270441aa9a8b30c2c70c5566ef50d71b7852e649ada",
	                    "EndpointID": "9442a295e48eefaf997ea0089a610105213721442a481356e87d507910dc59a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-505406 -n addons-505406
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-505406 logs -n 25: (2.234723003s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:25 UTC |                     |
	|         | -p download-only-037071              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | -p download-only-037071              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | -p download-only-037071              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| delete  | -p download-only-037071              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| delete  | -p download-only-037071              | download-only-037071   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| start   | --download-only -p                   | download-docker-388185 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | download-docker-388185               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-388185            | download-docker-388185 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| start   | --download-only -p                   | binary-mirror-180531   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | binary-mirror-180531                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34359               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-180531              | binary-mirror-180531   | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:26 UTC |
	| addons  | enable dashboard -p                  | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | addons-505406                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |                     |
	|         | addons-505406                        |                        |         |         |                     |                     |
	| start   | -p addons-505406 --wait=true         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC | 18 Dec 23 23:28 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-505406 ip                     | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	| addons  | addons-505406 addons disable         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-505406 addons                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | addons-505406                        |                        |         |         |                     |                     |
	| ssh     | addons-505406 ssh curl -s            | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-505406 ip                     | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	| addons  | addons-505406 addons disable         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-505406 addons disable         | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC |                     |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-505406 addons                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC |                     |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-505406 addons                 | addons-505406          | jenkins | v1.32.0 | 18 Dec 23 23:29 UTC | 18 Dec 23 23:29 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:26:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:26:37.292200 4010377 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:26:37.292409 4010377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:37.292418 4010377 out.go:309] Setting ErrFile to fd 2...
	I1218 23:26:37.292425 4010377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:37.292692 4010377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:26:37.293200 4010377 out.go:303] Setting JSON to false
	I1218 23:26:37.294050 4010377 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":198541,"bootTime":1702743457,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:26:37.294124 4010377 start.go:138] virtualization:  
	I1218 23:26:37.296498 4010377 out.go:177] * [addons-505406] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:26:37.299248 4010377 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:26:37.301246 4010377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:26:37.299439 4010377 notify.go:220] Checking for updates...
	I1218 23:26:37.304942 4010377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:26:37.307028 4010377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:26:37.308811 4010377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:26:37.310563 4010377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:26:37.312861 4010377 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:26:37.340219 4010377 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:26:37.340410 4010377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:37.422107 4010377 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:37.412229734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:37.422206 4010377 docker.go:295] overlay module found
	I1218 23:26:37.425562 4010377 out.go:177] * Using the docker driver based on user configuration
	I1218 23:26:37.427324 4010377 start.go:298] selected driver: docker
	I1218 23:26:37.427343 4010377 start.go:902] validating driver "docker" against <nil>
	I1218 23:26:37.427357 4010377 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:26:37.428016 4010377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:37.494435 4010377 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:37.485016879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:37.494589 4010377 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:26:37.494818 4010377 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 23:26:37.497252 4010377 out.go:177] * Using Docker driver with root privileges
	I1218 23:26:37.499009 4010377 cni.go:84] Creating CNI manager for ""
	I1218 23:26:37.499027 4010377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:26:37.499039 4010377 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:26:37.499056 4010377 start_flags.go:323] config:
	{Name:addons-505406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:26:37.501089 4010377 out.go:177] * Starting control plane node addons-505406 in cluster addons-505406
	I1218 23:26:37.502905 4010377 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1218 23:26:37.504987 4010377 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:26:37.506887 4010377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:26:37.506950 4010377 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1218 23:26:37.506963 4010377 cache.go:56] Caching tarball of preloaded images
	I1218 23:26:37.506971 4010377 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:26:37.507040 4010377 preload.go:174] Found /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 23:26:37.507050 4010377 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1218 23:26:37.507428 4010377 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/config.json ...
	I1218 23:26:37.507457 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/config.json: {Name:mk30dd6bf76cefa6c7749527f9b98923bb68ed32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:26:37.523899 4010377 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:26:37.524034 4010377 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:26:37.524054 4010377 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 23:26:37.524059 4010377 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 23:26:37.524067 4010377 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 23:26:37.524073 4010377 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from local cache
	I1218 23:26:53.557708 4010377 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from cached tarball
	I1218 23:26:53.557750 4010377 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:26:53.557818 4010377 start.go:365] acquiring machines lock for addons-505406: {Name:mk2ccdf55f1151729aacb931c7e8fe9ebfb0ea80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:53.557962 4010377 start.go:369] acquired machines lock for "addons-505406" in 117.653µs
	I1218 23:26:53.557993 4010377 start.go:93] Provisioning new machine with config: &{Name:addons-505406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 23:26:53.558079 4010377 start.go:125] createHost starting for "" (driver="docker")
	I1218 23:26:53.560706 4010377 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1218 23:26:53.561009 4010377 start.go:159] libmachine.API.Create for "addons-505406" (driver="docker")
	I1218 23:26:53.561044 4010377 client.go:168] LocalClient.Create starting
	I1218 23:26:53.561167 4010377 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem
	I1218 23:26:54.124770 4010377 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem
	I1218 23:26:54.703818 4010377 cli_runner.go:164] Run: docker network inspect addons-505406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 23:26:54.720948 4010377 cli_runner.go:211] docker network inspect addons-505406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 23:26:54.721028 4010377 network_create.go:281] running [docker network inspect addons-505406] to gather additional debugging logs...
	I1218 23:26:54.721049 4010377 cli_runner.go:164] Run: docker network inspect addons-505406
	W1218 23:26:54.738053 4010377 cli_runner.go:211] docker network inspect addons-505406 returned with exit code 1
	I1218 23:26:54.738084 4010377 network_create.go:284] error running [docker network inspect addons-505406]: docker network inspect addons-505406: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-505406 not found
	I1218 23:26:54.738098 4010377 network_create.go:286] output of [docker network inspect addons-505406]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-505406 not found
	
	** /stderr **
	I1218 23:26:54.738239 4010377 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:26:54.755228 4010377 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024f8870}
	I1218 23:26:54.755266 4010377 network_create.go:124] attempt to create docker network addons-505406 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 23:26:54.755321 4010377 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-505406 addons-505406
	I1218 23:26:54.825204 4010377 network_create.go:108] docker network addons-505406 192.168.49.0/24 created
	I1218 23:26:54.825237 4010377 kic.go:121] calculated static IP "192.168.49.2" for the "addons-505406" container
	I1218 23:26:54.825326 4010377 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:26:54.841985 4010377 cli_runner.go:164] Run: docker volume create addons-505406 --label name.minikube.sigs.k8s.io=addons-505406 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:26:54.861144 4010377 oci.go:103] Successfully created a docker volume addons-505406
	I1218 23:26:54.861243 4010377 cli_runner.go:164] Run: docker run --rm --name addons-505406-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505406 --entrypoint /usr/bin/test -v addons-505406:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:26:56.728701 4010377 cli_runner.go:217] Completed: docker run --rm --name addons-505406-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505406 --entrypoint /usr/bin/test -v addons-505406:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.867415277s)
	I1218 23:26:56.728739 4010377 oci.go:107] Successfully prepared a docker volume addons-505406
	I1218 23:26:56.728772 4010377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:26:56.728799 4010377 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:26:56.728909 4010377 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-505406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:27:00.917123 4010377 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-505406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.188157128s)
	I1218 23:27:00.917160 4010377 kic.go:203] duration metric: took 4.188357 seconds to extract preloaded images to volume
	W1218 23:27:00.917313 4010377 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:27:00.917424 4010377 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:27:00.990134 4010377 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-505406 --name addons-505406 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505406 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-505406 --network addons-505406 --ip 192.168.49.2 --volume addons-505406:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:27:01.362236 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Running}}
	I1218 23:27:01.393734 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:01.417654 4010377 cli_runner.go:164] Run: docker exec addons-505406 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:27:01.472638 4010377 oci.go:144] the created container "addons-505406" has a running status.
	I1218 23:27:01.472666 4010377 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa...
	I1218 23:27:01.987378 4010377 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:27:02.022934 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:02.047436 4010377 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:27:02.047461 4010377 kic_runner.go:114] Args: [docker exec --privileged addons-505406 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:27:02.110721 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:02.136592 4010377 machine.go:88] provisioning docker machine ...
	I1218 23:27:02.136625 4010377 ubuntu.go:169] provisioning hostname "addons-505406"
	I1218 23:27:02.136699 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:02.168253 4010377 main.go:141] libmachine: Using SSH client type: native
	I1218 23:27:02.168724 4010377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 42671 <nil> <nil>}
	I1218 23:27:02.168748 4010377 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-505406 && echo "addons-505406" | sudo tee /etc/hostname
	I1218 23:27:02.377388 4010377 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-505406
	
	I1218 23:27:02.377490 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:02.405625 4010377 main.go:141] libmachine: Using SSH client type: native
	I1218 23:27:02.406097 4010377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 42671 <nil> <nil>}
	I1218 23:27:02.406120 4010377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-505406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-505406/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-505406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:27:02.566156 4010377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 23:27:02.566223 4010377 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-4004447/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-4004447/.minikube}
	I1218 23:27:02.566270 4010377 ubuntu.go:177] setting up certificates
	I1218 23:27:02.566304 4010377 provision.go:83] configureAuth start
	I1218 23:27:02.566410 4010377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505406
	I1218 23:27:02.588425 4010377 provision.go:138] copyHostCerts
	I1218 23:27:02.588500 4010377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.pem (1082 bytes)
	I1218 23:27:02.588617 4010377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/cert.pem (1123 bytes)
	I1218 23:27:02.588714 4010377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/key.pem (1675 bytes)
	I1218 23:27:02.588769 4010377 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem org=jenkins.addons-505406 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-505406]
	I1218 23:27:03.113412 4010377 provision.go:172] copyRemoteCerts
	I1218 23:27:03.113510 4010377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:27:03.113559 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.133253 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.239923 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 23:27:03.269748 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 23:27:03.301192 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 23:27:03.331066 4010377 provision.go:86] duration metric: configureAuth took 764.719628ms
	I1218 23:27:03.331125 4010377 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:27:03.331322 4010377 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:03.331336 4010377 machine.go:91] provisioned docker machine in 1.194723437s
	I1218 23:27:03.331343 4010377 client.go:171] LocalClient.Create took 9.770288418s
	I1218 23:27:03.331361 4010377 start.go:167] duration metric: libmachine.API.Create for "addons-505406" took 9.770354156s
	I1218 23:27:03.331374 4010377 start.go:300] post-start starting for "addons-505406" (driver="docker")
	I1218 23:27:03.331383 4010377 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:27:03.331444 4010377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:27:03.331486 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.350193 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.455998 4010377 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:27:03.460402 4010377 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:27:03.460439 4010377 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:27:03.460452 4010377 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:27:03.460459 4010377 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:27:03.460473 4010377 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-4004447/.minikube/addons for local assets ...
	I1218 23:27:03.460544 4010377 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-4004447/.minikube/files for local assets ...
	I1218 23:27:03.460568 4010377 start.go:303] post-start completed in 129.188354ms
	I1218 23:27:03.460995 4010377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505406
	I1218 23:27:03.479570 4010377 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/config.json ...
	I1218 23:27:03.479856 4010377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:27:03.479913 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.498949 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.599109 4010377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 23:27:03.605029 4010377 start.go:128] duration metric: createHost completed in 10.046933623s
	I1218 23:27:03.605058 4010377 start.go:83] releasing machines lock for "addons-505406", held for 10.047081405s
	I1218 23:27:03.605136 4010377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505406
	I1218 23:27:03.623013 4010377 ssh_runner.go:195] Run: cat /version.json
	I1218 23:27:03.623082 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.623360 4010377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:27:03.623420 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:03.644113 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.644568 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:03.749786 4010377 ssh_runner.go:195] Run: systemctl --version
	I1218 23:27:03.889364 4010377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:27:03.895313 4010377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1218 23:27:03.926200 4010377 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:27:03.926341 4010377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:27:03.960210 4010377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1218 23:27:03.960285 4010377 start.go:475] detecting cgroup driver to use...
	I1218 23:27:03.960333 4010377 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:27:03.960414 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 23:27:03.975644 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 23:27:03.990220 4010377 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:27:03.990310 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:27:04.007963 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:27:04.025705 4010377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:27:04.126869 4010377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:27:04.230198 4010377 docker.go:219] disabling docker service ...
	I1218 23:27:04.230311 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:27:04.251627 4010377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:27:04.266235 4010377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:27:04.366233 4010377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:27:04.465595 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:27:04.479754 4010377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:27:04.499523 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 23:27:04.511774 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 23:27:04.524043 4010377 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 23:27:04.524164 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 23:27:04.536144 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 23:27:04.548239 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 23:27:04.560652 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 23:27:04.573219 4010377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:27:04.584841 4010377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 23:27:04.596662 4010377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:27:04.607006 4010377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:27:04.618976 4010377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:27:04.716224 4010377 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 23:27:04.863732 4010377 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 23:27:04.863874 4010377 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 23:27:04.868966 4010377 start.go:543] Will wait 60s for crictl version
	I1218 23:27:04.869054 4010377 ssh_runner.go:195] Run: which crictl
	I1218 23:27:04.873775 4010377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:27:04.918091 4010377 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1218 23:27:04.918182 4010377 ssh_runner.go:195] Run: containerd --version
	I1218 23:27:04.948293 4010377 ssh_runner.go:195] Run: containerd --version
	I1218 23:27:04.981532 4010377 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I1218 23:27:04.983544 4010377 cli_runner.go:164] Run: docker network inspect addons-505406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:27:05.004333 4010377 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 23:27:05.012704 4010377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:27:05.028403 4010377 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:27:05.028486 4010377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:27:05.072679 4010377 containerd.go:604] all images are preloaded for containerd runtime.
	I1218 23:27:05.072712 4010377 containerd.go:518] Images already preloaded, skipping extraction
	I1218 23:27:05.072773 4010377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:27:05.120543 4010377 containerd.go:604] all images are preloaded for containerd runtime.
	I1218 23:27:05.120569 4010377 cache_images.go:84] Images are preloaded, skipping loading
	I1218 23:27:05.120641 4010377 ssh_runner.go:195] Run: sudo crictl info
	I1218 23:27:05.166401 4010377 cni.go:84] Creating CNI manager for ""
	I1218 23:27:05.166433 4010377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:27:05.166469 4010377 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:27:05.166494 4010377 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-505406 NodeName:addons-505406 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 23:27:05.166643 4010377 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-505406"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 23:27:05.166717 4010377 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-505406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 23:27:05.166794 4010377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 23:27:05.179463 4010377 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:27:05.179549 4010377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 23:27:05.191775 4010377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1218 23:27:05.213964 4010377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 23:27:05.236025 4010377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1218 23:27:05.257979 4010377 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:27:05.262618 4010377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:27:05.276496 4010377 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406 for IP: 192.168.49.2
	I1218 23:27:05.276532 4010377 certs.go:190] acquiring lock for shared ca certs: {Name:mk406b12e6a80d6e5757943ee55b3a3d6680c96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.277056 4010377 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key
	I1218 23:27:05.486030 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt ...
	I1218 23:27:05.486068 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt: {Name:mk0fb448f34fc36bba3ee3d1f11cdce25cc0aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.486723 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key ...
	I1218 23:27:05.486740 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key: {Name:mkb39ae66a6f7eae1fc2542e2fcbf85ec3cb4e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.486840 4010377 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key
	I1218 23:27:05.932805 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt ...
	I1218 23:27:05.932836 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt: {Name:mk8477f7cbd8fcd5d4657b7e1a7890f13d74f9a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.933448 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key ...
	I1218 23:27:05.933464 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key: {Name:mk9ba7df0fb5db06291706011b4208407cde640c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:05.933593 4010377 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.key
	I1218 23:27:05.933609 4010377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt with IP's: []
	I1218 23:27:06.771188 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt ...
	I1218 23:27:06.771220 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: {Name:mk18a7486c38f159230614dfdce1d43c34517f87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.771813 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.key ...
	I1218 23:27:06.771829 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.key: {Name:mk253971840ff54c5df7f7f76c6a1b6039ee2e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.771936 4010377 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2
	I1218 23:27:06.771963 4010377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 23:27:06.959970 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2 ...
	I1218 23:27:06.959999 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2: {Name:mkced843377e0a244fcc135d677912e5779f319b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.960191 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2 ...
	I1218 23:27:06.960207 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2: {Name:mk1247b4bf884a01409848889e4f75dce1a04f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:06.960746 4010377 certs.go:337] copying /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt
	I1218 23:27:06.960832 4010377 certs.go:341] copying /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key
	I1218 23:27:06.960912 4010377 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key
	I1218 23:27:06.960929 4010377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt with IP's: []
	I1218 23:27:07.264320 4010377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt ...
	I1218 23:27:07.264350 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt: {Name:mka4a15fe590746f54c0c23809a71c18bb8a3577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:07.264542 4010377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key ...
	I1218 23:27:07.264555 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key: {Name:mkc2db8121f13298e9e1f44a66f0c29b401aea67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:07.265146 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:27:07.265197 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem (1082 bytes)
	I1218 23:27:07.265226 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:27:07.265255 4010377 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem (1675 bytes)
	I1218 23:27:07.265852 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 23:27:07.297421 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 23:27:07.327843 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 23:27:07.357647 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 23:27:07.386631 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:27:07.415095 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 23:27:07.443089 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:27:07.470907 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 23:27:07.499771 4010377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:27:07.529018 4010377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 23:27:07.550886 4010377 ssh_runner.go:195] Run: openssl version
	I1218 23:27:07.557878 4010377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:27:07.569789 4010377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:27:07.574488 4010377 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:27:07.574586 4010377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:27:07.583555 4010377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:27:07.595745 4010377 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:27:07.600203 4010377 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:27:07.600250 4010377 kubeadm.go:404] StartCluster: {Name:addons-505406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-505406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:27:07.600374 4010377 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 23:27:07.600453 4010377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:27:07.643559 4010377 cri.go:89] found id: ""
	I1218 23:27:07.643674 4010377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 23:27:07.654515 4010377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 23:27:07.665880 4010377 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 23:27:07.665974 4010377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 23:27:07.677579 4010377 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:27:07.677627 4010377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 23:27:07.730623 4010377 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 23:27:07.730942 4010377 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 23:27:07.779131 4010377 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:27:07.779255 4010377 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:27:07.779299 4010377 kubeadm.go:322] OS: Linux
	I1218 23:27:07.779354 4010377 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 23:27:07.779408 4010377 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 23:27:07.779461 4010377 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 23:27:07.779513 4010377 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 23:27:07.779566 4010377 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 23:27:07.779622 4010377 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 23:27:07.779672 4010377 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1218 23:27:07.779725 4010377 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1218 23:27:07.779776 4010377 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1218 23:27:07.864445 4010377 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:27:07.864592 4010377 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:27:07.864715 4010377 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 23:27:08.141238 4010377 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:27:08.145495 4010377 out.go:204]   - Generating certificates and keys ...
	I1218 23:27:08.145599 4010377 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 23:27:08.145683 4010377 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 23:27:08.581339 4010377 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:27:09.861125 4010377 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:27:10.221815 4010377 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 23:27:10.759553 4010377 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 23:27:11.054183 4010377 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 23:27:11.054498 4010377 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-505406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:27:11.293424 4010377 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 23:27:11.293791 4010377 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-505406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:27:11.595672 4010377 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:27:11.809329 4010377 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:27:12.161010 4010377 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 23:27:12.161284 4010377 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:27:12.422361 4010377 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:27:12.675402 4010377 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:27:14.017890 4010377 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:27:14.957991 4010377 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:27:14.958777 4010377 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:27:14.962831 4010377 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:27:14.965282 4010377 out.go:204]   - Booting up control plane ...
	I1218 23:27:14.965381 4010377 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:27:14.965455 4010377 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:27:14.965883 4010377 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:27:14.980907 4010377 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:27:14.981723 4010377 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:27:14.981954 4010377 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 23:27:15.108287 4010377 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:27:23.113211 4010377 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003218 seconds
	I1218 23:27:23.113327 4010377 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:27:23.134951 4010377 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:27:23.660518 4010377 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:27:23.661011 4010377 kubeadm.go:322] [mark-control-plane] Marking the node addons-505406 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 23:27:24.173381 4010377 kubeadm.go:322] [bootstrap-token] Using token: 2pck3j.iwjhkdhxathh9tdv
	I1218 23:27:24.175424 4010377 out.go:204]   - Configuring RBAC rules ...
	I1218 23:27:24.175549 4010377 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:27:24.181106 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:27:24.190732 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:27:24.194705 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:27:24.199601 4010377 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:27:24.205042 4010377 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:27:24.216843 4010377 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:27:24.458364 4010377 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 23:27:24.588212 4010377 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 23:27:24.589384 4010377 kubeadm.go:322] 
	I1218 23:27:24.589464 4010377 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 23:27:24.589478 4010377 kubeadm.go:322] 
	I1218 23:27:24.589551 4010377 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 23:27:24.589563 4010377 kubeadm.go:322] 
	I1218 23:27:24.589588 4010377 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 23:27:24.589648 4010377 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:27:24.589702 4010377 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:27:24.589711 4010377 kubeadm.go:322] 
	I1218 23:27:24.589770 4010377 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 23:27:24.589803 4010377 kubeadm.go:322] 
	I1218 23:27:24.589901 4010377 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 23:27:24.589926 4010377 kubeadm.go:322] 
	I1218 23:27:24.589976 4010377 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 23:27:24.590050 4010377 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:27:24.590118 4010377 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:27:24.590127 4010377 kubeadm.go:322] 
	I1218 23:27:24.590486 4010377 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:27:24.590626 4010377 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 23:27:24.590637 4010377 kubeadm.go:322] 
	I1218 23:27:24.590767 4010377 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2pck3j.iwjhkdhxathh9tdv \
	I1218 23:27:24.590894 4010377 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e312defcfa05e02bd60f7c592d29d4c5d570ecf2885804f11be3cfbfa6eee99b \
	I1218 23:27:24.590917 4010377 kubeadm.go:322] 	--control-plane 
	I1218 23:27:24.590922 4010377 kubeadm.go:322] 
	I1218 23:27:24.591009 4010377 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:27:24.591020 4010377 kubeadm.go:322] 
	I1218 23:27:24.591164 4010377 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2pck3j.iwjhkdhxathh9tdv \
	I1218 23:27:24.591316 4010377 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e312defcfa05e02bd60f7c592d29d4c5d570ecf2885804f11be3cfbfa6eee99b 
	I1218 23:27:24.595552 4010377 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:27:24.595663 4010377 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:27:24.595679 4010377 cni.go:84] Creating CNI manager for ""
	I1218 23:27:24.595687 4010377 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:27:24.597875 4010377 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 23:27:24.599877 4010377 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:27:24.607018 4010377 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 23:27:24.607040 4010377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:27:24.638467 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:27:25.592864 4010377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 23:27:25.593031 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:25.593120 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=addons-505406 minikube.k8s.io/updated_at=2023_12_18T23_27_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:25.610811 4010377 ops.go:34] apiserver oom_adj: -16
	I1218 23:27:25.794925 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:26.295909 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:26.795050 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:27.295489 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:27.795454 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:28.295540 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:28.795709 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:29.295075 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:29.795566 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:30.295216 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:30.795968 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:31.295421 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:31.794981 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:32.295209 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:32.795094 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:33.295036 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:33.795544 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:34.295311 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:34.795454 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:35.295972 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:35.795250 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:36.295068 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:36.795020 4010377 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:27:36.940690 4010377 kubeadm.go:1088] duration metric: took 11.347713969s to wait for elevateKubeSystemPrivileges.
	I1218 23:27:36.940714 4010377 kubeadm.go:406] StartCluster complete in 29.340467938s
	I1218 23:27:36.940758 4010377 settings.go:142] acquiring lock: {Name:mkc0bc26fbf229b708fca267aea9769f0f259f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:36.941396 4010377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:27:36.941899 4010377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/kubeconfig: {Name:mk056ad1e9e70ee26734d70551bb1d18ee8e2c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:27:36.942609 4010377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 23:27:36.942893 4010377 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:36.943066 4010377 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1218 23:27:36.943186 4010377 addons.go:69] Setting volumesnapshots=true in profile "addons-505406"
	I1218 23:27:36.943201 4010377 addons.go:231] Setting addon volumesnapshots=true in "addons-505406"
	I1218 23:27:36.943242 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.943728 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.944253 4010377 addons.go:69] Setting cloud-spanner=true in profile "addons-505406"
	I1218 23:27:36.944277 4010377 addons.go:231] Setting addon cloud-spanner=true in "addons-505406"
	I1218 23:27:36.944333 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.944814 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945886 4010377 addons.go:69] Setting metrics-server=true in profile "addons-505406"
	I1218 23:27:36.945987 4010377 addons.go:231] Setting addon metrics-server=true in "addons-505406"
	I1218 23:27:36.946048 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.946553 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.946954 4010377 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-505406"
	I1218 23:27:36.946976 4010377 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-505406"
	I1218 23:27:36.947025 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.947433 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945918 4010377 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-505406"
	I1218 23:27:36.967686 4010377 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-505406"
	I1218 23:27:36.967785 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.968351 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.969086 4010377 addons.go:69] Setting registry=true in profile "addons-505406"
	I1218 23:27:36.969117 4010377 addons.go:231] Setting addon registry=true in "addons-505406"
	I1218 23:27:36.969170 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.969703 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.996081 4010377 addons.go:69] Setting storage-provisioner=true in profile "addons-505406"
	I1218 23:27:36.996113 4010377 addons.go:231] Setting addon storage-provisioner=true in "addons-505406"
	I1218 23:27:36.996165 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:36.996614 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945931 4010377 addons.go:69] Setting default-storageclass=true in profile "addons-505406"
	I1218 23:27:37.005238 4010377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-505406"
	I1218 23:27:37.005647 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.017015 4010377 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-505406"
	I1218 23:27:37.017062 4010377 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-505406"
	I1218 23:27:37.017461 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:36.945946 4010377 addons.go:69] Setting gcp-auth=true in profile "addons-505406"
	I1218 23:27:37.024814 4010377 mustload.go:65] Loading cluster: addons-505406
	I1218 23:27:36.945955 4010377 addons.go:69] Setting ingress=true in profile "addons-505406"
	I1218 23:27:36.945961 4010377 addons.go:69] Setting ingress-dns=true in profile "addons-505406"
	I1218 23:27:36.945972 4010377 addons.go:69] Setting inspektor-gadget=true in profile "addons-505406"
	I1218 23:27:37.044425 4010377 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1218 23:27:37.050498 4010377 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1218 23:27:37.050566 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1218 23:27:37.050676 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.069167 4010377 config.go:182] Loaded profile config "addons-505406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:37.069625 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.087123 4010377 addons.go:231] Setting addon ingress=true in "addons-505406"
	I1218 23:27:37.087216 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.087698 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.120554 4010377 addons.go:231] Setting addon ingress-dns=true in "addons-505406"
	I1218 23:27:37.120666 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.125929 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.168521 4010377 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1218 23:27:37.188300 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1218 23:27:37.198677 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1218 23:27:37.198835 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.157534 4010377 addons.go:231] Setting addon inspektor-gadget=true in "addons-505406"
	I1218 23:27:37.199897 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.200461 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.219446 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1218 23:27:37.223237 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1218 23:27:37.223305 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1218 23:27:37.223410 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.254535 4010377 addons.go:231] Setting addon default-storageclass=true in "addons-505406"
	I1218 23:27:37.254577 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.255058 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.258586 4010377 out.go:177]   - Using image docker.io/registry:2.8.3
	I1218 23:27:37.266168 4010377 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1218 23:27:37.268172 4010377 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 23:27:37.268241 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1218 23:27:37.268346 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.284780 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1218 23:27:37.287749 4010377 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1218 23:27:37.287770 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1218 23:27:37.287838 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.285051 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1218 23:27:37.360782 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1218 23:27:37.365723 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1218 23:27:37.367851 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1218 23:27:37.372998 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1218 23:27:37.374835 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1218 23:27:37.376594 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1218 23:27:37.378785 4010377 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1218 23:27:37.375810 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.388948 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:27:37.391869 4010377 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:27:37.391885 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 23:27:37.391962 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.389142 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1218 23:27:37.397629 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1218 23:27:37.397737 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.382940 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.419539 4010377 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-505406"
	I1218 23:27:37.419577 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:37.420020 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:37.475040 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:27:37.481071 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:27:37.483051 4010377 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1218 23:27:37.490871 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1218 23:27:37.490909 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1218 23:27:37.490994 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.496965 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1218 23:27:37.502768 4010377 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 23:27:37.502805 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1218 23:27:37.502892 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.512346 4010377 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-505406" context rescaled to 1 replicas
	I1218 23:27:37.512405 4010377 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 23:27:37.521431 4010377 out.go:177] * Verifying Kubernetes components...
	I1218 23:27:37.525317 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:27:37.536128 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.546432 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1218 23:27:37.549405 4010377 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 23:27:37.549435 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1218 23:27:37.549526 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.597729 4010377 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 23:27:37.597751 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 23:27:37.597891 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.603779 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.617171 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.633099 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.696081 4010377 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1218 23:27:37.700183 4010377 out.go:177]   - Using image docker.io/busybox:stable
	I1218 23:27:37.696624 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.702835 4010377 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 23:27:37.703549 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1218 23:27:37.703647 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:37.751697 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.758154 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.801191 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.810147 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.828835 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:37.848605 4010377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 23:27:37.854466 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	W1218 23:27:37.855626 4010377 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1218 23:27:37.855682 4010377 retry.go:31] will retry after 212.486282ms: ssh: handshake failed: EOF
	I1218 23:27:38.012818 4010377 node_ready.go:35] waiting up to 6m0s for node "addons-505406" to be "Ready" ...
	I1218 23:27:38.018940 4010377 node_ready.go:49] node "addons-505406" has status "Ready":"True"
	I1218 23:27:38.018979 4010377 node_ready.go:38] duration metric: took 6.078168ms waiting for node "addons-505406" to be "Ready" ...
	I1218 23:27:38.018992 4010377 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:27:38.049703 4010377 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace to be "Ready" ...
	I1218 23:27:38.475647 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 23:27:38.522042 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1218 23:27:38.522076 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1218 23:27:38.550513 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 23:27:38.606326 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1218 23:27:38.606356 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1218 23:27:38.619325 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 23:27:38.630005 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1218 23:27:38.669075 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 23:27:38.689616 4010377 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1218 23:27:38.689650 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1218 23:27:38.690754 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1218 23:27:38.690774 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1218 23:27:38.715954 4010377 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1218 23:27:38.715981 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1218 23:27:38.839423 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1218 23:27:38.839451 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1218 23:27:38.879262 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:27:38.937136 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1218 23:27:38.937172 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1218 23:27:38.977026 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1218 23:27:38.977060 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1218 23:27:39.023664 4010377 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1218 23:27:39.023735 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1218 23:27:39.041843 4010377 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1218 23:27:39.041868 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1218 23:27:39.080016 4010377 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1218 23:27:39.080054 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1218 23:27:39.105237 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1218 23:27:39.105263 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1218 23:27:39.135881 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 23:27:39.147912 4010377 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:27:39.147947 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1218 23:27:39.170261 4010377 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 23:27:39.170293 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1218 23:27:39.292370 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1218 23:27:39.292406 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1218 23:27:39.305924 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:27:39.314005 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1218 23:27:39.321383 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1218 23:27:39.321411 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1218 23:27:39.392763 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 23:27:39.496691 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1218 23:27:39.496727 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1218 23:27:39.555986 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1218 23:27:39.556022 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1218 23:27:39.781539 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1218 23:27:39.781573 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1218 23:27:39.807632 4010377 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1218 23:27:39.807666 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1218 23:27:39.854732 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1218 23:27:39.854758 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1218 23:27:39.933268 4010377 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 23:27:39.933290 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1218 23:27:39.935028 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1218 23:27:39.935046 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1218 23:27:40.053414 4010377 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5d47t" not found
	I1218 23:27:40.053453 4010377 pod_ready.go:81] duration metric: took 2.003704039s waiting for pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace to be "Ready" ...
	E1218 23:27:40.053466 4010377 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5d47t" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5d47t" not found
	I1218 23:27:40.053474 4010377 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace to be "Ready" ...
	I1218 23:27:40.201187 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1218 23:27:40.201212 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1218 23:27:40.239561 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 23:27:40.243222 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1218 23:27:40.243246 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1218 23:27:40.523284 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1218 23:27:40.523308 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1218 23:27:40.802941 4010377 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 23:27:40.802978 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1218 23:27:40.859184 4010377 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.010530764s)
	I1218 23:27:40.859223 4010377 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1218 23:27:40.859274 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.383543388s)
	I1218 23:27:40.996717 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 23:27:42.076240 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:42.382078 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.831519411s)
	I1218 23:27:44.230488 4010377 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1218 23:27:44.230593 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:44.264991 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:44.563648 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:44.633526 4010377 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1218 23:27:44.738103 4010377 addons.go:231] Setting addon gcp-auth=true in "addons-505406"
	I1218 23:27:44.738203 4010377 host.go:66] Checking if "addons-505406" exists ...
	I1218 23:27:44.738753 4010377 cli_runner.go:164] Run: docker container inspect addons-505406 --format={{.State.Status}}
	I1218 23:27:44.769206 4010377 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1218 23:27:44.769257 4010377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505406
	I1218 23:27:44.801102 4010377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42671 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/addons-505406/id_rsa Username:docker}
	I1218 23:27:44.902199 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.27215206s)
	I1218 23:27:44.902259 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.233161896s)
	I1218 23:27:44.902311 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.023007583s)
	I1218 23:27:44.902338 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.766432241s)
	I1218 23:27:44.902568 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596613208s)
	W1218 23:27:44.902590 4010377 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 23:27:44.902605 4010377 retry.go:31] will retry after 238.498404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 23:27:44.902635 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.588604215s)
	I1218 23:27:44.902644 4010377 addons.go:467] Verifying addon registry=true in "addons-505406"
	I1218 23:27:44.905453 4010377 out.go:177] * Verifying registry addon...
	I1218 23:27:44.903071 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.510276629s)
	I1218 23:27:44.903162 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.663567968s)
	I1218 23:27:44.903577 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.284217075s)
	I1218 23:27:44.907384 4010377 addons.go:467] Verifying addon ingress=true in "addons-505406"
	I1218 23:27:44.909551 4010377 out.go:177] * Verifying ingress addon...
	I1218 23:27:44.907494 4010377 addons.go:467] Verifying addon metrics-server=true in "addons-505406"
	I1218 23:27:44.908279 4010377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1218 23:27:44.912280 4010377 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1218 23:27:44.923171 4010377 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 23:27:44.924393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:44.924162 4010377 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1218 23:27:44.924467 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:45.141410 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:27:45.419677 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:45.421552 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:45.921555 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:45.931092 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:46.424103 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:46.424814 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:46.569163 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:46.628381 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.631591619s)
	I1218 23:27:46.628477 4010377 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-505406"
	I1218 23:27:46.631043 4010377 out.go:177] * Verifying csi-hostpath-driver addon...
	I1218 23:27:46.628715 4010377 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.859486779s)
	I1218 23:27:46.634674 4010377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1218 23:27:46.637411 4010377 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:27:46.639499 4010377 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1218 23:27:46.642155 4010377 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1218 23:27:46.642230 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1218 23:27:46.645503 4010377 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 23:27:46.645582 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:46.730970 4010377 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1218 23:27:46.731043 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1218 23:27:46.806198 4010377 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 23:27:46.806265 4010377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1218 23:27:46.880199 4010377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 23:27:46.919331 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:46.922116 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:47.143897 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:47.418412 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:47.420585 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:47.436371 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.294805029s)
	I1218 23:27:47.642170 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:47.924324 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:47.927953 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:48.013355 4010377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.13306904s)
	I1218 23:27:48.017018 4010377 addons.go:467] Verifying addon gcp-auth=true in "addons-505406"
	I1218 23:27:48.021080 4010377 out.go:177] * Verifying gcp-auth addon...
	I1218 23:27:48.024258 4010377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1218 23:27:48.035888 4010377 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1218 23:27:48.035956 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:48.146985 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:48.418551 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:48.419746 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:48.528279 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:48.641370 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:48.917408 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:48.918369 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:49.028282 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:49.060562 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:49.141580 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:49.420301 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:49.421720 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:49.528284 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:49.641348 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:49.918380 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:49.919899 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:50.029149 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:50.141871 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:50.418938 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:50.419565 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:50.529043 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:50.641840 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:50.918711 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:50.924106 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:51.030325 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:51.062305 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:51.141004 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:51.420381 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:51.423611 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:51.530602 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:51.641326 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:51.919506 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:51.921135 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:52.030913 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:52.146887 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:52.419107 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:52.419617 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:52.528957 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:52.640549 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:52.918921 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:52.921464 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:53.028238 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:53.144708 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:53.419728 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:53.421584 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:53.527988 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:53.560844 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:53.641623 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:53.918514 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:53.919582 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:54.029315 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:54.146835 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:54.418216 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:54.418346 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:54.528074 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:54.642480 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:54.919058 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:54.920428 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:55.028333 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:55.143884 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:55.416731 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:55.417648 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:55.528222 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:55.640804 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:55.916398 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:55.917876 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:56.029039 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:56.060340 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:56.140964 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:56.420044 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:56.421804 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:56.528516 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:56.641038 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:56.921216 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:56.922294 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:57.028319 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:57.144445 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:57.423297 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:57.423467 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:57.527921 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:57.641340 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:57.917878 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:57.918234 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:58.028592 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:58.144985 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:58.417343 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:58.417582 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:58.527893 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:58.560309 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:27:58.641543 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:58.917246 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:58.919822 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:59.028575 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:59.140838 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:59.416415 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:59.416921 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:27:59.528507 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:27:59.642061 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:27:59.917321 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:27:59.917848 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:00.060270 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:00.188171 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:00.423766 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:00.425821 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:00.528854 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:00.560415 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:00.641147 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:00.918034 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:00.918830 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:01.028131 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:01.141289 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:01.417071 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:01.417350 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:01.528863 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:01.642124 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:01.917393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:01.918212 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:02.027913 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:02.141591 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:02.416582 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:02.417885 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:02.528585 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:02.560626 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:02.640250 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:02.916272 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:02.916803 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:03.028371 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:03.146658 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:03.418029 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:03.418220 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:03.528801 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:03.641626 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:03.916910 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:03.918684 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:04.028943 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:04.141304 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:04.417424 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:04.418289 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:04.527869 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:04.560773 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:04.641050 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:04.917241 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:04.919964 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:05.028584 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:05.151301 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:05.417621 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:05.418207 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:05.529278 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:05.640766 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:05.916766 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:05.918063 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:06.028930 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:06.147081 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:06.417777 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:06.419164 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:06.528835 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:06.641685 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:06.917204 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:06.918184 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:07.028180 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:07.060381 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:07.141209 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:07.416609 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:07.418039 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:07.528413 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:07.640155 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:07.917806 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:07.918716 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:08.028496 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:08.139939 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:08.416747 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:08.418935 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:08.528684 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:08.640409 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:08.916526 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:08.917443 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:09.028175 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:09.061048 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:09.143868 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:09.417393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:09.417526 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:09.527969 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:09.640681 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:09.919519 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:09.921483 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:10.028803 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:10.141341 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:10.416562 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:10.418131 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:10.528334 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:10.641220 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:10.916059 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:10.917275 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:11.028188 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:11.061198 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:11.140799 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:11.416663 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:11.418635 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:11.528370 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:11.641470 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:11.917825 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:11.918749 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:12.028670 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:12.147346 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:12.419427 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:12.420015 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:12.528298 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:12.642136 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:12.924857 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:12.926175 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:13.030603 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:13.061870 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:13.152709 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:13.423204 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:13.423466 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:13.529138 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:13.643726 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:13.922382 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:13.923804 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:14.029824 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:14.145393 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:14.420605 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:14.422444 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:14.529282 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:14.643887 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:14.939208 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:14.941110 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:15.031313 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:15.070391 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:15.145706 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:15.417834 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:15.418750 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:15.528820 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:15.643698 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:15.920037 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:15.921052 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:16.028453 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:16.143488 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:16.417693 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:16.418436 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:16.528910 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:16.641559 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:16.922043 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:16.922652 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:17.029129 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:17.147337 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:17.417443 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:17.419901 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:17.529014 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:17.560348 4010377 pod_ready.go:102] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"False"
	I1218 23:28:17.642194 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:17.922512 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:17.923405 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:18.038443 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:18.146432 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:18.428907 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:18.429928 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:18.528620 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:18.560616 4010377 pod_ready.go:92] pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.560644 4010377 pod_ready.go:81] duration metric: took 38.507160518s waiting for pod "coredns-5dd5756b68-gz5tv" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.560659 4010377 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.567454 4010377 pod_ready.go:92] pod "etcd-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.567480 4010377 pod_ready.go:81] duration metric: took 6.813737ms waiting for pod "etcd-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.567495 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.574360 4010377 pod_ready.go:92] pod "kube-apiserver-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.574386 4010377 pod_ready.go:81] duration metric: took 6.881601ms waiting for pod "kube-apiserver-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.574400 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.580936 4010377 pod_ready.go:92] pod "kube-controller-manager-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.580964 4010377 pod_ready.go:81] duration metric: took 6.55563ms waiting for pod "kube-controller-manager-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.580977 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w7pxw" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.588894 4010377 pod_ready.go:92] pod "kube-proxy-w7pxw" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.588920 4010377 pod_ready.go:81] duration metric: took 7.934935ms waiting for pod "kube-proxy-w7pxw" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.588933 4010377 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.640525 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:18.918205 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:18.919599 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:18.958545 4010377 pod_ready.go:92] pod "kube-scheduler-addons-505406" in "kube-system" namespace has status "Ready":"True"
	I1218 23:28:18.958619 4010377 pod_ready.go:81] duration metric: took 369.676303ms waiting for pod "kube-scheduler-addons-505406" in "kube-system" namespace to be "Ready" ...
	I1218 23:28:18.958646 4010377 pod_ready.go:38] duration metric: took 40.939640488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:28:18.958693 4010377 api_server.go:52] waiting for apiserver process to appear ...
	I1218 23:28:18.958793 4010377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:28:18.978375 4010377 api_server.go:72] duration metric: took 41.465925889s to wait for apiserver process to appear ...
	I1218 23:28:18.978454 4010377 api_server.go:88] waiting for apiserver healthz status ...
	I1218 23:28:18.978487 4010377 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 23:28:18.988691 4010377 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1218 23:28:18.990295 4010377 api_server.go:141] control plane version: v1.28.4
	I1218 23:28:18.990322 4010377 api_server.go:131] duration metric: took 11.848195ms to wait for apiserver health ...
	I1218 23:28:18.990332 4010377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 23:28:19.029125 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:19.140488 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:19.165260 4010377 system_pods.go:59] 18 kube-system pods found
	I1218 23:28:19.165390 4010377 system_pods.go:61] "coredns-5dd5756b68-gz5tv" [e63b5341-2e55-47f0-b88e-dc22e0403e80] Running
	I1218 23:28:19.165416 4010377 system_pods.go:61] "csi-hostpath-attacher-0" [87e738f3-e48f-4316-8ed7-ccccd9114b41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1218 23:28:19.165456 4010377 system_pods.go:61] "csi-hostpath-resizer-0" [cb9a514e-9677-434d-b771-76f09efcd2f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1218 23:28:19.165487 4010377 system_pods.go:61] "csi-hostpathplugin-kwqtb" [2409be6f-8c39-4c22-b0d9-125994297ab2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 23:28:19.165509 4010377 system_pods.go:61] "etcd-addons-505406" [6034f836-b02e-4727-b166-5aa9fb36bbf4] Running
	I1218 23:28:19.165531 4010377 system_pods.go:61] "kindnet-ktkh2" [54313ed0-0489-48b5-93c3-351993a995c9] Running
	I1218 23:28:19.165562 4010377 system_pods.go:61] "kube-apiserver-addons-505406" [fc22a2d0-866e-4268-b6fe-1fb26e29631e] Running
	I1218 23:28:19.165587 4010377 system_pods.go:61] "kube-controller-manager-addons-505406" [8240dd25-f7d5-48d9-836a-fed1350af622] Running
	I1218 23:28:19.165616 4010377 system_pods.go:61] "kube-ingress-dns-minikube" [3a5f8190-8536-42e5-b817-a63d75a1d1b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 23:28:19.165641 4010377 system_pods.go:61] "kube-proxy-w7pxw" [9c0fe76b-5b4a-4787-8efb-4ec3fd477fa7] Running
	I1218 23:28:19.165673 4010377 system_pods.go:61] "kube-scheduler-addons-505406" [bb60524d-bc22-4d01-8eee-bf44e27d12d2] Running
	I1218 23:28:19.165703 4010377 system_pods.go:61] "metrics-server-7c66d45ddc-zjzgk" [d76dd58c-71b2-415f-b318-39b1117343c1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 23:28:19.165730 4010377 system_pods.go:61] "nvidia-device-plugin-daemonset-sr2zs" [5b296706-5778-44a8-a5fe-4eeaec480f20] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 23:28:19.165759 4010377 system_pods.go:61] "registry-proxy-x7nmz" [953b9840-520c-42b3-8b05-574b76391cd3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 23:28:19.165795 4010377 system_pods.go:61] "registry-wzr22" [24ef522d-90a4-4844-810a-182a22d8094c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 23:28:19.165820 4010377 system_pods.go:61] "snapshot-controller-58dbcc7b99-r8g6k" [ec8f8883-b1e3-4610-9fd1-e0eafac8e50a] Running
	I1218 23:28:19.165845 4010377 system_pods.go:61] "snapshot-controller-58dbcc7b99-z5h7b" [981d723e-25af-46f7-8a80-2251c7aad093] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 23:28:19.165878 4010377 system_pods.go:61] "storage-provisioner" [77221557-03fd-4a34-aa8b-c096521c83e3] Running
	I1218 23:28:19.165904 4010377 system_pods.go:74] duration metric: took 175.564322ms to wait for pod list to return data ...
	I1218 23:28:19.165928 4010377 default_sa.go:34] waiting for default service account to be created ...
	I1218 23:28:19.356973 4010377 default_sa.go:45] found service account: "default"
	I1218 23:28:19.356999 4010377 default_sa.go:55] duration metric: took 191.051211ms for default service account to be created ...
	I1218 23:28:19.357011 4010377 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 23:28:19.416774 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:19.418741 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:19.528413 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:19.563887 4010377 system_pods.go:86] 18 kube-system pods found
	I1218 23:28:19.563920 4010377 system_pods.go:89] "coredns-5dd5756b68-gz5tv" [e63b5341-2e55-47f0-b88e-dc22e0403e80] Running
	I1218 23:28:19.563930 4010377 system_pods.go:89] "csi-hostpath-attacher-0" [87e738f3-e48f-4316-8ed7-ccccd9114b41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1218 23:28:19.563940 4010377 system_pods.go:89] "csi-hostpath-resizer-0" [cb9a514e-9677-434d-b771-76f09efcd2f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1218 23:28:19.563952 4010377 system_pods.go:89] "csi-hostpathplugin-kwqtb" [2409be6f-8c39-4c22-b0d9-125994297ab2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 23:28:19.563964 4010377 system_pods.go:89] "etcd-addons-505406" [6034f836-b02e-4727-b166-5aa9fb36bbf4] Running
	I1218 23:28:19.563974 4010377 system_pods.go:89] "kindnet-ktkh2" [54313ed0-0489-48b5-93c3-351993a995c9] Running
	I1218 23:28:19.563979 4010377 system_pods.go:89] "kube-apiserver-addons-505406" [fc22a2d0-866e-4268-b6fe-1fb26e29631e] Running
	I1218 23:28:19.563988 4010377 system_pods.go:89] "kube-controller-manager-addons-505406" [8240dd25-f7d5-48d9-836a-fed1350af622] Running
	I1218 23:28:19.563997 4010377 system_pods.go:89] "kube-ingress-dns-minikube" [3a5f8190-8536-42e5-b817-a63d75a1d1b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 23:28:19.564003 4010377 system_pods.go:89] "kube-proxy-w7pxw" [9c0fe76b-5b4a-4787-8efb-4ec3fd477fa7] Running
	I1218 23:28:19.564014 4010377 system_pods.go:89] "kube-scheduler-addons-505406" [bb60524d-bc22-4d01-8eee-bf44e27d12d2] Running
	I1218 23:28:19.564022 4010377 system_pods.go:89] "metrics-server-7c66d45ddc-zjzgk" [d76dd58c-71b2-415f-b318-39b1117343c1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 23:28:19.564032 4010377 system_pods.go:89] "nvidia-device-plugin-daemonset-sr2zs" [5b296706-5778-44a8-a5fe-4eeaec480f20] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 23:28:19.564040 4010377 system_pods.go:89] "registry-proxy-x7nmz" [953b9840-520c-42b3-8b05-574b76391cd3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 23:28:19.564047 4010377 system_pods.go:89] "registry-wzr22" [24ef522d-90a4-4844-810a-182a22d8094c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 23:28:19.564054 4010377 system_pods.go:89] "snapshot-controller-58dbcc7b99-r8g6k" [ec8f8883-b1e3-4610-9fd1-e0eafac8e50a] Running
	I1218 23:28:19.564062 4010377 system_pods.go:89] "snapshot-controller-58dbcc7b99-z5h7b" [981d723e-25af-46f7-8a80-2251c7aad093] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 23:28:19.564072 4010377 system_pods.go:89] "storage-provisioner" [77221557-03fd-4a34-aa8b-c096521c83e3] Running
	I1218 23:28:19.564079 4010377 system_pods.go:126] duration metric: took 207.06165ms to wait for k8s-apps to be running ...
	I1218 23:28:19.564090 4010377 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:28:19.564147 4010377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:28:19.579290 4010377 system_svc.go:56] duration metric: took 15.192163ms WaitForService to wait for kubelet.
	I1218 23:28:19.579317 4010377 kubeadm.go:581] duration metric: took 42.066873795s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:28:19.579337 4010377 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:28:19.661941 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:19.757961 4010377 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:28:19.758043 4010377 node_conditions.go:123] node cpu capacity is 2
	I1218 23:28:19.758083 4010377 node_conditions.go:105] duration metric: took 178.739307ms to run NodePressure ...
	I1218 23:28:19.758115 4010377 start.go:228] waiting for startup goroutines ...
	I1218 23:28:19.925551 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:19.926434 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:20.028747 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:20.141178 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:20.419471 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:20.420212 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:20.527869 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:20.641293 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:20.916308 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:20.918392 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:21.029887 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:21.146689 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:21.417290 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:21.418463 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:21.528667 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:21.641269 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:21.917506 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:21.918362 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:22.028440 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:22.148032 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:22.417500 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:22.419890 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:22.528615 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:22.641444 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:22.917953 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:22.919776 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:23.028678 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:23.150182 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:23.418621 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:23.420761 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:23.529705 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:23.641829 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:23.917836 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:23.917907 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:24.035435 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:24.146386 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:24.419908 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:24.421179 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:24.528147 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:24.641948 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:24.917381 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:24.918978 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:25.030203 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:25.145261 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:25.419632 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:25.422431 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:25.529304 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:25.642261 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:25.919290 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:25.920205 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:26.028685 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:26.143487 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:26.419750 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:26.420713 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:26.528295 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:26.640924 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:26.916838 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:26.918324 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:27.028575 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:27.140826 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:27.417218 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:27.418109 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:27.528692 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:27.640038 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:27.917107 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:27.918879 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:28.029456 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:28.142593 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:28.418670 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:28.420047 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:28.529071 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:28.641364 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:28.918183 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:28.919292 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:29.028172 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:29.141279 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:29.418753 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:29.420543 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:29.528536 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:29.644060 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:29.919221 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:29.920139 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:30.032264 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:30.151059 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:30.417829 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:30.418652 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:30.529447 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:30.641023 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:30.917460 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:30.918325 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:31.028012 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:31.146041 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:31.417361 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:31.417737 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:31.528516 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:31.641217 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:31.916770 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:31.917932 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:32.028614 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:32.141537 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:32.415965 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:32.417106 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:32.528622 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:32.640591 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:32.917691 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:32.918013 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:33.028554 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:33.146481 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:33.419059 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:33.419615 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:33.528947 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:33.640765 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:33.917351 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:33.918400 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:34.028341 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:34.143527 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:34.416714 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:34.418742 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:34.529297 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:34.641151 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:34.920011 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:34.920933 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:35.029182 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:28:35.147388 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:35.418337 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:35.419190 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:35.528119 4010377 kapi.go:107] duration metric: took 47.503859123s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1218 23:28:35.534772 4010377 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-505406 cluster.
	I1218 23:28:35.536840 4010377 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1218 23:28:35.538585 4010377 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1218 23:28:35.640183 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:35.917293 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:35.918368 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:36.144239 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:36.423560 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:36.425011 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:36.641768 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:36.919091 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:36.920691 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:37.141297 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:37.419043 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:37.421412 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:37.641728 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:37.919608 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:37.920706 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:38.146008 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:38.420037 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:38.421074 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:38.641366 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:38.920681 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:28:38.921700 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:39.143924 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:39.418713 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:39.419219 4010377 kapi.go:107] duration metric: took 54.510939992s to wait for kubernetes.io/minikube-addons=registry ...
	I1218 23:28:39.640723 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:39.918452 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:40.143117 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:40.416664 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:40.641038 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:40.923054 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:41.142838 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:41.418375 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:41.641395 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:41.917112 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:42.142886 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:42.417178 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:42.640984 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:42.924238 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:43.141547 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:43.416902 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:43.641986 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:43.920934 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:44.140517 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:44.417449 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:44.641097 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:44.917399 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:45.151532 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:45.417978 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:45.641417 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:45.916808 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:46.141248 4010377 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:28:46.417244 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:46.641044 4010377 kapi.go:107] duration metric: took 1m0.006367208s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1218 23:28:46.917504 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:47.417245 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:47.916452 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:48.417287 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:48.917347 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:49.417109 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:49.917002 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:50.416505 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:50.917615 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:51.417468 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:51.916611 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:52.417790 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:52.922228 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:53.416709 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:53.918275 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:54.416710 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:54.917305 4010377 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:28:55.438920 4010377 kapi.go:107] duration metric: took 1m10.526638521s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1218 23:28:55.441937 4010377 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, gcp-auth, registry, csi-hostpath-driver, ingress
	I1218 23:28:55.444445 4010377 addons.go:502] enable addons completed in 1m18.501326331s: enabled=[nvidia-device-plugin storage-provisioner-rancher cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server default-storageclass volumesnapshots gcp-auth registry csi-hostpath-driver ingress]
	I1218 23:28:55.444563 4010377 start.go:233] waiting for cluster config update ...
	I1218 23:28:55.444624 4010377 start.go:242] writing updated cluster config ...
	I1218 23:28:55.445416 4010377 ssh_runner.go:195] Run: rm -f paused
	I1218 23:28:55.797115 4010377 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 23:28:55.801349 4010377 out.go:177] * Done! kubectl is now configured to use "addons-505406" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	4f46fe23b52fb       dd1b12fcb6097       3 seconds ago        Exited              hello-world-app                          2                   197b9f76b8cb0       hello-world-app-5d77478584-tt9xh
	591728da1e65e       f09fc93534f6a       28 seconds ago       Running             nginx                                    0                   8410070f74e50       nginx
	018b0c429c3a3       f065bfef03d73       About a minute ago   Exited              controller                               0                   469d620b1092d       ingress-nginx-controller-7c6974c4d8-mhg4r
	3f36a2465d9bf       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	8b06dca5aa5ef       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	880dfea0fb2e3       922312104da8a       About a minute ago   Running             liveness-probe                           0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	58e7b38faf296       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	92d3f49d8c601       2a5f29343eb03       About a minute ago   Running             gcp-auth                                 0                   453cb95cb5e3c       gcp-auth-d4c87556c-qvvtm
	ef395bc49bb60       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   8164f33da46b4       local-path-provisioner-78b46b4d5c-7w9vs
	13ce277a8e71c       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	4294ca04a895f       a8df1f5260cb4       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   1e09da3ac2ec3       nvidia-device-plugin-daemonset-sr2zs
	534aae4930e00       f7be1a5e72885       About a minute ago   Running             cloud-spanner-emulator                   0                   fa6cb6b4355d4       cloud-spanner-emulator-5649c69bf6-j6mj7
	1f95b97d17df8       af594c6a879f2       About a minute ago   Exited              patch                                    0                   9221e3de7fe0b       ingress-nginx-admission-patch-7mb7n
	af2154afe0636       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   a3ea3a21f56a1       csi-hostpath-attacher-0
	a4ab8d7cb7a8b       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   67782eadf843c       csi-hostpath-resizer-0
	a12797d4d7fc0       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   ca3107cd4fc09       csi-hostpathplugin-kwqtb
	c9dd70f36c4eb       97e04611ad434       About a minute ago   Running             coredns                                  0                   9816b1470eba3       coredns-5dd5756b68-gz5tv
	6a63806b689b1       af594c6a879f2       About a minute ago   Exited              create                                   0                   ee9696273f635       ingress-nginx-admission-create-j9q4d
	ce090beec9f33       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   d67f9ace944b1       storage-provisioner
	ed6d8cd967f07       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                              0                   fd4daecb30762       kindnet-ktkh2
	20a4491161c86       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   fda768cae677f       kube-proxy-w7pxw
	cc31a5ffc3f26       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   b5734a513bd61       kube-apiserver-addons-505406
	111b78b6df78f       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   8fef85fbc4d9c       etcd-addons-505406
	c28d2856689f9       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   31e9c8039dfa0       kube-scheduler-addons-505406
	80dcec64bc94c       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   241a1d47cc166       kube-controller-manager-addons-505406
	
	* 
	* ==> containerd <==
	* Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.242766806Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:29:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9164 runtime=io.containerd.runc.v2\n"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.254055686Z" level=info msg="TearDown network for sandbox \"089a9d52b86022c55fed683b23089e138b8454dc4c18c7bd7f4a57b1ef1db23c\" successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.254103136Z" level=info msg="StopPodSandbox for \"089a9d52b86022c55fed683b23089e138b8454dc4c18c7bd7f4a57b1ef1db23c\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.278828812Z" level=info msg="TearDown network for sandbox \"1648aa7d62a31dfea7cbc6edb620d9363562b2753399a33739b3d0c96d56f551\" successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.278890686Z" level=info msg="StopPodSandbox for \"1648aa7d62a31dfea7cbc6edb620d9363562b2753399a33739b3d0c96d56f551\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.717489998Z" level=info msg="RemoveContainer for \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\""
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.737086693Z" level=info msg="RemoveContainer for \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.744299294Z" level=error msg="ContainerStatus for \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": not found"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.749549005Z" level=info msg="RemoveContainer for \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\""
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.769588321Z" level=info msg="RemoveContainer for \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\" returns successfully"
	Dec 18 23:29:59 addons-505406 containerd[744]: time="2023-12-18T23:29:59.770437456Z" level=error msg="ContainerStatus for \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": not found"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.667170466Z" level=info msg="Kill container \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\""
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.746117789Z" level=info msg="shim disconnected" id=018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.746634331Z" level=warning msg="cleaning up after shim disconnected" id=018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae namespace=k8s.io
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.746711778Z" level=info msg="cleaning up dead shim"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.762655108Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:30:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9332 runtime=io.containerd.runc.v2\n"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.767555184Z" level=info msg="StopContainer for \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\" returns successfully"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.768531817Z" level=info msg="StopPodSandbox for \"469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9\""
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.768685243Z" level=info msg="Container to stop \"018b0c429c3a32ca2e6e4be42370a4da6005d043d35ab73e8442ffc0c2b105ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.841144547Z" level=info msg="shim disconnected" id=469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.841444787Z" level=warning msg="cleaning up after shim disconnected" id=469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9 namespace=k8s.io
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.841555588Z" level=info msg="cleaning up dead shim"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.856198594Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:30:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9372 runtime=io.containerd.runc.v2\n"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.927412007Z" level=info msg="TearDown network for sandbox \"469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9\" successfully"
	Dec 18 23:30:00 addons-505406 containerd[744]: time="2023-12-18T23:30:00.927540309Z" level=info msg="StopPodSandbox for \"469d620b1092d5e3f882ff422fb39a652064459dfed93c0839ad7aa69310bbf9\" returns successfully"
	
	* 
	* ==> coredns [c9dd70f36c4eb293bf4eade6dd5572f67e303fc0d0c67d4be56ace1c5e8f1022] <==
	* [INFO] 10.244.0.19:37892 - 34743 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000283961s
	[INFO] 10.244.0.19:37892 - 19875 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054301s
	[INFO] 10.244.0.19:37892 - 46558 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052767s
	[INFO] 10.244.0.19:41787 - 14547 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000245496s
	[INFO] 10.244.0.19:37892 - 16953 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001215917s
	[INFO] 10.244.0.19:37892 - 9212 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001089593s
	[INFO] 10.244.0.19:37892 - 34763 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075675s
	[INFO] 10.244.0.19:45340 - 59043 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000125628s
	[INFO] 10.244.0.19:45340 - 29604 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109456s
	[INFO] 10.244.0.19:51837 - 42420 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060578s
	[INFO] 10.244.0.19:51837 - 62240 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000072886s
	[INFO] 10.244.0.19:51837 - 6385 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000104467s
	[INFO] 10.244.0.19:45340 - 32328 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000160738s
	[INFO] 10.244.0.19:45340 - 26654 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073419s
	[INFO] 10.244.0.19:51837 - 43712 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035872s
	[INFO] 10.244.0.19:51837 - 44135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073279s
	[INFO] 10.244.0.19:45340 - 39191 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037341s
	[INFO] 10.244.0.19:51837 - 51590 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037826s
	[INFO] 10.244.0.19:45340 - 53668 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042199s
	[INFO] 10.244.0.19:45340 - 23407 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006382757s
	[INFO] 10.244.0.19:51837 - 58671 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006560816s
	[INFO] 10.244.0.19:51837 - 40689 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297968s
	[INFO] 10.244.0.19:45340 - 34305 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001447202s
	[INFO] 10.244.0.19:51837 - 46219 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062416s
	[INFO] 10.244.0.19:45340 - 18090 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048049s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-505406
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-505406
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=addons-505406
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T23_27_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-505406
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-505406"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:27:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-505406
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:29:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:29:57 +0000   Mon, 18 Dec 2023 23:27:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-505406
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bdf59c5d75f4f8bb2d2e90b60e7fd8e
	  System UUID:                102d1fd6-2ff2-4b64-8ff3-ed26f256c4f7
	  Boot ID:                    890256b0-dbd9-440c-9da4-c1f4e1d4cc44
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-j6mj7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     hello-world-app-5d77478584-tt9xh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  gcp-auth                    gcp-auth-d4c87556c-qvvtm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 coredns-5dd5756b68-gz5tv                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 csi-hostpathplugin-kwqtb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 etcd-addons-505406                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m37s
	  kube-system                 kindnet-ktkh2                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-addons-505406               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-addons-505406      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-w7pxw                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-addons-505406               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 nvidia-device-plugin-daemonset-sr2zs       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  local-path-storage          local-path-provisioner-78b46b4d5c-7w9vs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m22s  kube-proxy       
	  Normal  Starting                 2m37s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m37s  kubelet          Node addons-505406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s  kubelet          Node addons-505406 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s  kubelet          Node addons-505406 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m37s  kubelet          Node addons-505406 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m27s  kubelet          Node addons-505406 status is now: NodeReady
	  Normal  RegisteredNode           2m25s  node-controller  Node addons-505406 event: Registered Node addons-505406 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001125] FS-Cache: O-key=[8] '246e5c0100000000'
	[  +0.000829] FS-Cache: N-cookie c=0000023a [p=00000231 fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=0000000075b7cdeb
	[  +0.001111] FS-Cache: N-key=[8] '246e5c0100000000'
	[  +0.003808] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=00000234 [p=00000231 fl=226 nc=0 na=1]
	[  +0.001070] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=0000000030d65e10
	[  +0.001118] FS-Cache: O-key=[8] '246e5c0100000000'
	[  +0.000761] FS-Cache: N-cookie c=0000023b [p=00000231 fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000f67add3f
	[  +0.001180] FS-Cache: N-key=[8] '246e5c0100000000'
	[  +2.759454] FS-Cache: Duplicate cookie detected
	[  +0.000817] FS-Cache: O-cookie c=00000232 [p=00000231 fl=226 nc=0 na=1]
	[  +0.001047] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=000000000462ecb2
	[  +0.001127] FS-Cache: O-key=[8] '236e5c0100000000'
	[  +0.000764] FS-Cache: N-cookie c=0000023d [p=00000231 fl=2 nc=0 na=1]
	[  +0.001132] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000fe66079f
	[  +0.001124] FS-Cache: N-key=[8] '236e5c0100000000'
	[  +0.425127] FS-Cache: Duplicate cookie detected
	[  +0.000854] FS-Cache: O-cookie c=00000237 [p=00000231 fl=226 nc=0 na=1]
	[  +0.001164] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=0000000072235349
	[  +0.001236] FS-Cache: O-key=[8] '296e5c0100000000'
	[  +0.000804] FS-Cache: N-cookie c=0000023e [p=00000231 fl=2 nc=0 na=1]
	[  +0.001165] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000d61c5791
	[  +0.001214] FS-Cache: N-key=[8] '296e5c0100000000'
	
	* 
	* ==> etcd [111b78b6df78f7ec01d94768a4f407a45b63b80becff52634ea0adf94d8d9d54] <==
	* {"level":"info","ts":"2023-12-18T23:27:16.86739Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:27:16.867414Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:27:16.867423Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:27:16.867911Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-18T23:27:16.867926Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-18T23:27:16.871087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-18T23:27:16.871218Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-18T23:27:17.348929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-18T23:27:17.349037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-18T23:27:17.349086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-18T23:27:17.349158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.349192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.349235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.349276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-18T23:27:17.353086Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-505406 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T23:27:17.353221Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.353301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T23:27:17.36155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-18T23:27:17.364973Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.368981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.369149Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:27:17.353311Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T23:27:17.370353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-18T23:27:17.404913Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T23:27:17.40513Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [92d3f49d8c601e145cf0e168296af02725d3a93573812ef204b5b4a7156ccad7] <==
	* 2023/12/18 23:28:35 GCP Auth Webhook started!
	2023/12/18 23:29:07 Ready to marshal response ...
	2023/12/18 23:29:07 Ready to write response ...
	2023/12/18 23:29:18 Ready to marshal response ...
	2023/12/18 23:29:18 Ready to write response ...
	2023/12/18 23:29:30 Ready to marshal response ...
	2023/12/18 23:29:30 Ready to write response ...
	2023/12/18 23:29:39 Ready to marshal response ...
	2023/12/18 23:29:39 Ready to write response ...
	2023/12/18 23:29:48 Ready to marshal response ...
	2023/12/18 23:29:48 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:30:01 up 2 days,  7:12,  0 users,  load average: 2.30, 2.59, 2.57
	Linux addons-505406 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [ed6d8cd967f0788ecabbfc41d2adba3d1ff1687ade20dae72c72314d78adfc7a] <==
	* I1218 23:28:08.835461       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1218 23:28:08.849433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:08.849464       1 main.go:227] handling current node
	I1218 23:28:18.864407       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:18.864434       1 main.go:227] handling current node
	I1218 23:28:28.876442       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:28.876468       1 main.go:227] handling current node
	I1218 23:28:38.880726       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:38.880754       1 main.go:227] handling current node
	I1218 23:28:48.887705       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:48.887738       1 main.go:227] handling current node
	I1218 23:28:58.895601       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:28:58.895633       1 main.go:227] handling current node
	I1218 23:29:08.899916       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:08.899944       1 main.go:227] handling current node
	I1218 23:29:18.912616       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:18.912641       1 main.go:227] handling current node
	I1218 23:29:28.925226       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:28.925255       1 main.go:227] handling current node
	I1218 23:29:38.929882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:38.929913       1 main.go:227] handling current node
	I1218 23:29:48.940926       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:48.940955       1 main.go:227] handling current node
	I1218 23:29:58.956484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:29:58.956520       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [cc31a5ffc3f26172d2d2c55c47d28afe9d74094af496c29e4be838e78246b10a] <==
	* I1218 23:29:24.401020       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W1218 23:29:25.320083       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1218 23:29:28.523926       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1218 23:29:29.983163       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1218 23:29:30.363464       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.55.182"}
	I1218 23:29:40.293578       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.192.223"}
	I1218 23:29:58.821753       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.821794       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.837077       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.837129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.847093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.847499       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.864132       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.865818       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.883681       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.883735       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.886235       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.886293       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.905902       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.906823       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:29:58.914978       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:29:58.915022       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1218 23:29:59.847211       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1218 23:29:59.916233       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1218 23:29:59.933006       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [80dcec64bc94c4be300129c5f8f1df06353f3aa7b5ca03eafd1d8bc55821b494] <==
	* I1218 23:29:39.915907       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-tt9xh"
	I1218 23:29:39.941114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.656908ms"
	I1218 23:29:39.957404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.228504ms"
	I1218 23:29:39.969277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.821061ms"
	I1218 23:29:39.969366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.33µs"
	I1218 23:29:42.588051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.342µs"
	I1218 23:29:43.605638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.449µs"
	I1218 23:29:44.596029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.03µs"
	W1218 23:29:45.637092       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:29:45.637126       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 23:29:47.659668       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 23:29:57.633241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.792µs"
	I1218 23:29:57.641998       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1218 23:29:57.688677       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1218 23:29:58.709224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.084µs"
	I1218 23:29:58.961198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.047µs"
	E1218 23:29:59.849297       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:29:59.922118       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:29:59.935487       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:00.852650       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:00.852685       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:01.169971       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:01.170016       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:30:01.279134       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:30:01.279172       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [20a4491161c86aaa7542b651c6cf9ac91f2212222ba854f55cdb2a7528c7d1f3] <==
	* I1218 23:27:38.418683       1 server_others.go:69] "Using iptables proxy"
	I1218 23:27:38.451076       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1218 23:27:38.541756       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1218 23:27:38.544192       1 server_others.go:152] "Using iptables Proxier"
	I1218 23:27:38.544232       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1218 23:27:38.544242       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1218 23:27:38.544302       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 23:27:38.544539       1 server.go:846] "Version info" version="v1.28.4"
	I1218 23:27:38.544555       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 23:27:38.546005       1 config.go:188] "Starting service config controller"
	I1218 23:27:38.546057       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 23:27:38.546109       1 config.go:97] "Starting endpoint slice config controller"
	I1218 23:27:38.546115       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 23:27:38.548305       1 config.go:315] "Starting node config controller"
	I1218 23:27:38.548330       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 23:27:38.647729       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 23:27:38.647780       1 shared_informer.go:318] Caches are synced for service config
	I1218 23:27:38.649726       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c28d2856689f9f3f41a3169203f6ffaa98b4c51d37101e318368fbcb2c57cd8a] <==
	* W1218 23:27:21.946587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 23:27:21.946871       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.947039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 23:27:21.947190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 23:27:21.947376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 23:27:21.947485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 23:27:21.947724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 23:27:21.947861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 23:27:21.948063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.948179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 23:27:21.948213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1218 23:27:21.948267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 23:27:21.948563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 23:27:21.948951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1218 23:27:21.948985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.949124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 23:27:21.949145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.949192       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 23:27:21.949211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1218 23:27:21.949279       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 23:27:21.949296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 23:27:21.948196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 23:27:21.949775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 23:27:21.949801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1218 23:27:23.335795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 18 23:29:58 addons-505406 kubelet[1339]: I1218 23:29:58.695852    1339 scope.go:117] "RemoveContainer" containerID="9051ae5546265880237621a32255cec72246c0ae3fb256724631c56f03344999"
	Dec 18 23:29:58 addons-505406 kubelet[1339]: I1218 23:29:58.696521    1339 scope.go:117] "RemoveContainer" containerID="4f46fe23b52fb63657fee268eda5790be64b6e7858b475c41375166a636cb2b4"
	Dec 18 23:29:58 addons-505406 kubelet[1339]: E1218 23:29:58.696799    1339 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-tt9xh_default(c0b9ae9f-eabd-4744-aa36-cc322a509639)\"" pod="default/hello-world-app-5d77478584-tt9xh" podUID="c0b9ae9f-eabd-4744-aa36-cc322a509639"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.365443    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vklzj\" (UniqueName: \"kubernetes.io/projected/981d723e-25af-46f7-8a80-2251c7aad093-kube-api-access-vklzj\") pod \"981d723e-25af-46f7-8a80-2251c7aad093\" (UID: \"981d723e-25af-46f7-8a80-2251c7aad093\") "
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.366075    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8cnd\" (UniqueName: \"kubernetes.io/projected/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a-kube-api-access-r8cnd\") pod \"ec8f8883-b1e3-4610-9fd1-e0eafac8e50a\" (UID: \"ec8f8883-b1e3-4610-9fd1-e0eafac8e50a\") "
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.368271    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981d723e-25af-46f7-8a80-2251c7aad093-kube-api-access-vklzj" (OuterVolumeSpecName: "kube-api-access-vklzj") pod "981d723e-25af-46f7-8a80-2251c7aad093" (UID: "981d723e-25af-46f7-8a80-2251c7aad093"). InnerVolumeSpecName "kube-api-access-vklzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.369980    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a-kube-api-access-r8cnd" (OuterVolumeSpecName: "kube-api-access-r8cnd") pod "ec8f8883-b1e3-4610-9fd1-e0eafac8e50a" (UID: "ec8f8883-b1e3-4610-9fd1-e0eafac8e50a"). InnerVolumeSpecName "kube-api-access-r8cnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.467144    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vklzj\" (UniqueName: \"kubernetes.io/projected/981d723e-25af-46f7-8a80-2251c7aad093-kube-api-access-vklzj\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.467200    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r8cnd\" (UniqueName: \"kubernetes.io/projected/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a-kube-api-access-r8cnd\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.699733    1339 scope.go:117] "RemoveContainer" containerID="fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.743275    1339 scope.go:117] "RemoveContainer" containerID="fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: E1218 23:29:59.744723    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": not found" containerID="fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.744773    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40"} err="failed to get container status \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe63150344b911734bef1f14daa54a651fa33ded5ecd3ddab4ff68043fde9e40\": not found"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.748034    1339 scope.go:117] "RemoveContainer" containerID="4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.770087    1339 scope.go:117] "RemoveContainer" containerID="4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: E1218 23:29:59.771667    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": not found" containerID="4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"
	Dec 18 23:29:59 addons-505406 kubelet[1339]: I1218 23:29:59.771773    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8"} err="failed to get container status \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4789e64a73b50db15aed2d58e50ab62d4ec2ddeef46b59affedbb36a4c8e3ea8\": not found"
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.607119    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="981d723e-25af-46f7-8a80-2251c7aad093" path="/var/lib/kubelet/pods/981d723e-25af-46f7-8a80-2251c7aad093/volumes"
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.607604    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec8f8883-b1e3-4610-9fd1-e0eafac8e50a" path="/var/lib/kubelet/pods/ec8f8883-b1e3-4610-9fd1-e0eafac8e50a/volumes"
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.992172    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwwd\" (UniqueName: \"kubernetes.io/projected/cad694e6-d708-4710-b8d9-61731db55c47-kube-api-access-zbwwd\") pod \"cad694e6-d708-4710-b8d9-61731db55c47\" (UID: \"cad694e6-d708-4710-b8d9-61731db55c47\") "
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.992733    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cad694e6-d708-4710-b8d9-61731db55c47-webhook-cert\") pod \"cad694e6-d708-4710-b8d9-61731db55c47\" (UID: \"cad694e6-d708-4710-b8d9-61731db55c47\") "
	Dec 18 23:30:00 addons-505406 kubelet[1339]: I1218 23:30:00.995313    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad694e6-d708-4710-b8d9-61731db55c47-kube-api-access-zbwwd" (OuterVolumeSpecName: "kube-api-access-zbwwd") pod "cad694e6-d708-4710-b8d9-61731db55c47" (UID: "cad694e6-d708-4710-b8d9-61731db55c47"). InnerVolumeSpecName "kube-api-access-zbwwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.001046    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad694e6-d708-4710-b8d9-61731db55c47-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cad694e6-d708-4710-b8d9-61731db55c47" (UID: "cad694e6-d708-4710-b8d9-61731db55c47"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.093180    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zbwwd\" (UniqueName: \"kubernetes.io/projected/cad694e6-d708-4710-b8d9-61731db55c47-kube-api-access-zbwwd\") on node \"addons-505406\" DevicePath \"\""
	Dec 18 23:30:01 addons-505406 kubelet[1339]: I1218 23:30:01.093228    1339 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cad694e6-d708-4710-b8d9-61731db55c47-webhook-cert\") on node \"addons-505406\" DevicePath \"\""
	
	* 
	* ==> storage-provisioner [ce090beec9f33d032454c060633b807f4f48e527381868698dd659568876a342] <==
	* I1218 23:27:44.119851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 23:27:44.170096       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 23:27:44.170169       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 23:27:44.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 23:27:44.183291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-505406_af493d41-53b7-4610-adef-d54045f0af0b!
	I1218 23:27:44.193557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b38b873d-8aa0-4633-a415-96c7f51855eb", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-505406_af493d41-53b7-4610-adef-d54045f0af0b became leader
	I1218 23:27:44.286316       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-505406_af493d41-53b7-4610-adef-d54045f0af0b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-505406 -n addons-505406
helpers_test.go:261: (dbg) Run:  kubectl --context addons-505406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (66.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image load --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 image load --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr: (4.376471158s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-773431" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image load --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 image load --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr: (3.247977497s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-773431" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.46995469s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-773431
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image load --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 image load --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr: (3.217193181s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-773431" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.99s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image save gcr.io/google-containers/addon-resizer:functional-773431 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I1218 23:36:20.828686 4042524 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:36:20.829916 4042524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:36:20.829972 4042524 out.go:309] Setting ErrFile to fd 2...
	I1218 23:36:20.829995 4042524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:36:20.830327 4042524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:36:20.831181 4042524 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:36:20.831390 4042524 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:36:20.832145 4042524 cli_runner.go:164] Run: docker container inspect functional-773431 --format={{.State.Status}}
	I1218 23:36:20.852228 4042524 ssh_runner.go:195] Run: systemctl --version
	I1218 23:36:20.852323 4042524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773431
	I1218 23:36:20.873634 4042524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42686 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/functional-773431/id_rsa Username:docker}
	I1218 23:36:20.975803 4042524 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1218 23:36:20.975887 4042524 cache_images.go:254] Failed to load cached images for profile functional-773431. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1218 23:36:20.975924 4042524 cache_images.go:262] succeeded pushing to: 
	I1218 23:36:20.975952 4042524 cache_images.go:263] failed pushing to: functional-773431

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (49.09s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-909642 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-909642 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.836391071s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-909642 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-909642 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e22ce0d3-2b61-4799-8b9f-d746bc9128db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e22ce0d3-2b61-4799-8b9f-d746bc9128db] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.00287403s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-909642 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021911812s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons disable ingress-dns --alsologtostderr -v=1: (5.626265085s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons disable ingress --alsologtostderr -v=1: (7.615716464s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-909642
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-909642:

-- stdout --
	[
	    {
	        "Id": "a38b958c6c6a53c7b2ef46e213e750a3f2ac53865f7da5f86cd43b813fa03b6e",
	        "Created": "2023-12-18T23:36:46.15703027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4043682,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T23:36:46.471355641Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/a38b958c6c6a53c7b2ef46e213e750a3f2ac53865f7da5f86cd43b813fa03b6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a38b958c6c6a53c7b2ef46e213e750a3f2ac53865f7da5f86cd43b813fa03b6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a38b958c6c6a53c7b2ef46e213e750a3f2ac53865f7da5f86cd43b813fa03b6e/hosts",
	        "LogPath": "/var/lib/docker/containers/a38b958c6c6a53c7b2ef46e213e750a3f2ac53865f7da5f86cd43b813fa03b6e/a38b958c6c6a53c7b2ef46e213e750a3f2ac53865f7da5f86cd43b813fa03b6e-json.log",
	        "Name": "/ingress-addon-legacy-909642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-909642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-909642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da707168446cd6bb49ff6fefd7ec3d0a211f845977fc8088c8fd16587ad49376-init/diff:/var/lib/docker/overlay2/348b7bce1eeb3fbac023de8c50816ddfb5fe3d6cead44e087fa78b4f572e0dfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da707168446cd6bb49ff6fefd7ec3d0a211f845977fc8088c8fd16587ad49376/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da707168446cd6bb49ff6fefd7ec3d0a211f845977fc8088c8fd16587ad49376/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da707168446cd6bb49ff6fefd7ec3d0a211f845977fc8088c8fd16587ad49376/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-909642",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-909642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-909642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-909642",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-909642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e87f615522be39dcf73f277152a03204da94cfa90ba53e60d93e47b92c3f3e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42691"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42690"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42687"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42689"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42688"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/37e87f615522",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-909642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a38b958c6c6a",
	                        "ingress-addon-legacy-909642"
	                    ],
	                    "NetworkID": "2b6629d4c73b69072c11ec777b2808f43f99be529a0ad4952e1688dbafd5f533",
	                    "EndpointID": "dc91e15f7fe49f8bd7b4684e14240c59225af5809cf56e462dcf1cc4db541f7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-909642 -n ingress-addon-legacy-909642
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-909642 logs -n 25: (1.446047796s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-773431 image ls                                                   | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	| image   | functional-773431 image load --daemon                                        | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-773431                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431 image ls                                                   | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	| image   | functional-773431 image load --daemon                                        | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-773431                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431 image ls                                                   | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	| image   | functional-773431 image save                                                 | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-773431                     |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431 image rm                                                   | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-773431                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431 image ls                                                   | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	| image   | functional-773431 image load                                                 | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431 image save --daemon                                        | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-773431                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431                                                            | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431                                                            | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431                                                            | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-773431                                                            | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-773431 ssh pgrep                                                  | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-773431 image build -t                                             | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | localhost/my-image:functional-773431                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-773431 image ls                                                   | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	| delete  | -p functional-773431                                                         | functional-773431           | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	| start   | -p ingress-addon-legacy-909642                                               | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:37 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-909642                                                  | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:37 UTC | 18 Dec 23 23:37 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-909642                                                  | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:37 UTC | 18 Dec 23 23:37 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-909642                                                  | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:38 UTC | 18 Dec 23 23:38 UTC |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-909642 ip                                               | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:38 UTC | 18 Dec 23 23:38 UTC |
	| addons  | ingress-addon-legacy-909642                                                  | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:38 UTC | 18 Dec 23 23:38 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-909642                                                  | ingress-addon-legacy-909642 | jenkins | v1.32.0 | 18 Dec 23 23:38 UTC | 18 Dec 23 23:38 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:36:28
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:36:28.169219 4043226 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:36:28.169391 4043226 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:36:28.169403 4043226 out.go:309] Setting ErrFile to fd 2...
	I1218 23:36:28.169420 4043226 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:36:28.169689 4043226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:36:28.170150 4043226 out.go:303] Setting JSON to false
	I1218 23:36:28.171067 4043226 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":199132,"bootTime":1702743457,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:36:28.171142 4043226 start.go:138] virtualization:  
	I1218 23:36:28.174701 4043226 out.go:177] * [ingress-addon-legacy-909642] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:36:28.178436 4043226 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:36:28.180943 4043226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:36:28.178626 4043226 notify.go:220] Checking for updates...
	I1218 23:36:28.185799 4043226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:36:28.188564 4043226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:36:28.190951 4043226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:36:28.193081 4043226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:36:28.195562 4043226 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:36:28.224972 4043226 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:36:28.225109 4043226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:36:28.310650 4043226 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-18 23:36:28.300059329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:36:28.310760 4043226 docker.go:295] overlay module found
	I1218 23:36:28.313173 4043226 out.go:177] * Using the docker driver based on user configuration
	I1218 23:36:28.315599 4043226 start.go:298] selected driver: docker
	I1218 23:36:28.315623 4043226 start.go:902] validating driver "docker" against <nil>
	I1218 23:36:28.315637 4043226 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:36:28.316294 4043226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:36:28.393395 4043226 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-18 23:36:28.38307238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:36:28.393569 4043226 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:36:28.393802 4043226 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 23:36:28.396506 4043226 out.go:177] * Using Docker driver with root privileges
	I1218 23:36:28.399190 4043226 cni.go:84] Creating CNI manager for ""
	I1218 23:36:28.399218 4043226 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:36:28.399233 4043226 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:36:28.399245 4043226 start_flags.go:323] config:
	{Name:ingress-addon-legacy-909642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-909642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:36:28.404452 4043226 out.go:177] * Starting control plane node ingress-addon-legacy-909642 in cluster ingress-addon-legacy-909642
	I1218 23:36:28.406952 4043226 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1218 23:36:28.409139 4043226 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:36:28.411327 4043226 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1218 23:36:28.411520 4043226 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:36:28.429326 4043226 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1218 23:36:28.429349 4043226 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1218 23:36:28.484323 4043226 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1218 23:36:28.484348 4043226 cache.go:56] Caching tarball of preloaded images
	I1218 23:36:28.484503 4043226 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1218 23:36:28.487479 4043226 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1218 23:36:28.490106 4043226 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:36:28.603624 4043226 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1218 23:36:38.263389 4043226 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:36:38.264622 4043226 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:36:39.470465 4043226 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1218 23:36:39.470843 4043226 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/config.json ...
	I1218 23:36:39.470879 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/config.json: {Name:mkf5cf9024d0bcdf079d02338d00fdb783cb69f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:36:39.471101 4043226 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:36:39.471158 4043226 start.go:365] acquiring machines lock for ingress-addon-legacy-909642: {Name:mkaa35ec82370ddc3d25174f0ff30ad6808b6b8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:36:39.471219 4043226 start.go:369] acquired machines lock for "ingress-addon-legacy-909642" in 42.83µs
	I1218 23:36:39.471247 4043226 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-909642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-909642 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 23:36:39.471322 4043226 start.go:125] createHost starting for "" (driver="docker")
	I1218 23:36:39.474026 4043226 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1218 23:36:39.474262 4043226 start.go:159] libmachine.API.Create for "ingress-addon-legacy-909642" (driver="docker")
	I1218 23:36:39.474292 4043226 client.go:168] LocalClient.Create starting
	I1218 23:36:39.474382 4043226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem
	I1218 23:36:39.474426 4043226 main.go:141] libmachine: Decoding PEM data...
	I1218 23:36:39.474445 4043226 main.go:141] libmachine: Parsing certificate...
	I1218 23:36:39.474504 4043226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem
	I1218 23:36:39.474526 4043226 main.go:141] libmachine: Decoding PEM data...
	I1218 23:36:39.474544 4043226 main.go:141] libmachine: Parsing certificate...
	I1218 23:36:39.474931 4043226 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-909642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 23:36:39.493787 4043226 cli_runner.go:211] docker network inspect ingress-addon-legacy-909642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 23:36:39.493868 4043226 network_create.go:281] running [docker network inspect ingress-addon-legacy-909642] to gather additional debugging logs...
	I1218 23:36:39.493889 4043226 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-909642
	W1218 23:36:39.511131 4043226 cli_runner.go:211] docker network inspect ingress-addon-legacy-909642 returned with exit code 1
	I1218 23:36:39.511180 4043226 network_create.go:284] error running [docker network inspect ingress-addon-legacy-909642]: docker network inspect ingress-addon-legacy-909642: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-909642 not found
	I1218 23:36:39.511199 4043226 network_create.go:286] output of [docker network inspect ingress-addon-legacy-909642]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-909642 not found
	
	** /stderr **
	I1218 23:36:39.511299 4043226 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:36:39.531438 4043226 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40020f81a0}
	I1218 23:36:39.531475 4043226 network_create.go:124] attempt to create docker network ingress-addon-legacy-909642 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 23:36:39.531536 4043226 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-909642 ingress-addon-legacy-909642
	I1218 23:36:39.604284 4043226 network_create.go:108] docker network ingress-addon-legacy-909642 192.168.49.0/24 created
	I1218 23:36:39.604315 4043226 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-909642" container
	I1218 23:36:39.604389 4043226 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:36:39.621432 4043226 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-909642 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-909642 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:36:39.640692 4043226 oci.go:103] Successfully created a docker volume ingress-addon-legacy-909642
	I1218 23:36:39.640789 4043226 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-909642-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-909642 --entrypoint /usr/bin/test -v ingress-addon-legacy-909642:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:36:41.163353 4043226 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-909642-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-909642 --entrypoint /usr/bin/test -v ingress-addon-legacy-909642:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.522521256s)
	I1218 23:36:41.163387 4043226 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-909642
	I1218 23:36:41.163405 4043226 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1218 23:36:41.163424 4043226 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:36:41.163508 4043226 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-909642:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:36:46.067912 4043226 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-909642:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.904359799s)
	I1218 23:36:46.067949 4043226 kic.go:203] duration metric: took 4.904519 seconds to extract preloaded images to volume
	W1218 23:36:46.068112 4043226 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:36:46.068234 4043226 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:36:46.141055 4043226 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-909642 --name ingress-addon-legacy-909642 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-909642 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-909642 --network ingress-addon-legacy-909642 --ip 192.168.49.2 --volume ingress-addon-legacy-909642:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:36:46.479456 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Running}}
	I1218 23:36:46.507484 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Status}}
	I1218 23:36:46.527126 4043226 cli_runner.go:164] Run: docker exec ingress-addon-legacy-909642 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:36:46.595741 4043226 oci.go:144] the created container "ingress-addon-legacy-909642" has a running status.
	I1218 23:36:46.595767 4043226 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa...
	I1218 23:36:46.952176 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1218 23:36:46.952296 4043226 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:36:46.990626 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Status}}
	I1218 23:36:47.023099 4043226 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:36:47.023122 4043226 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-909642 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:36:47.118086 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Status}}
	I1218 23:36:47.149085 4043226 machine.go:88] provisioning docker machine ...
	I1218 23:36:47.149116 4043226 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-909642"
	I1218 23:36:47.149180 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:47.193044 4043226 main.go:141] libmachine: Using SSH client type: native
	I1218 23:36:47.193485 4043226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 42691 <nil> <nil>}
	I1218 23:36:47.193498 4043226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-909642 && echo "ingress-addon-legacy-909642" | sudo tee /etc/hostname
	I1218 23:36:47.194134 4043226 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 23:36:50.360999 4043226 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-909642
	
	I1218 23:36:50.361091 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:50.380851 4043226 main.go:141] libmachine: Using SSH client type: native
	I1218 23:36:50.381387 4043226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 42691 <nil> <nil>}
	I1218 23:36:50.381414 4043226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-909642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-909642/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-909642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:36:50.534139 4043226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 23:36:50.534174 4043226 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-4004447/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-4004447/.minikube}
	I1218 23:36:50.534193 4043226 ubuntu.go:177] setting up certificates
	I1218 23:36:50.534203 4043226 provision.go:83] configureAuth start
	I1218 23:36:50.534265 4043226 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-909642
	I1218 23:36:50.551514 4043226 provision.go:138] copyHostCerts
	I1218 23:36:50.551559 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.pem
	I1218 23:36:50.551591 4043226 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.pem, removing ...
	I1218 23:36:50.551603 4043226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.pem
	I1218 23:36:50.551680 4043226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.pem (1082 bytes)
	I1218 23:36:50.551764 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cert.pem
	I1218 23:36:50.551786 4043226 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-4004447/.minikube/cert.pem, removing ...
	I1218 23:36:50.551790 4043226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-4004447/.minikube/cert.pem
	I1218 23:36:50.551824 4043226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/cert.pem (1123 bytes)
	I1218 23:36:50.551884 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17822-4004447/.minikube/key.pem
	I1218 23:36:50.551907 4043226 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-4004447/.minikube/key.pem, removing ...
	I1218 23:36:50.551914 4043226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-4004447/.minikube/key.pem
	I1218 23:36:50.551938 4043226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-4004447/.minikube/key.pem (1675 bytes)
	I1218 23:36:50.551987 4043226 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-909642 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-909642]
	I1218 23:36:51.005629 4043226 provision.go:172] copyRemoteCerts
	I1218 23:36:51.005715 4043226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:36:51.005765 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:51.025579 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:36:51.132452 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 23:36:51.132522 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 23:36:51.163756 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 23:36:51.163836 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1218 23:36:51.193222 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 23:36:51.193287 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 23:36:51.221946 4043226 provision.go:86] duration metric: configureAuth took 687.727313ms
	I1218 23:36:51.221973 4043226 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:36:51.222174 4043226 config.go:182] Loaded profile config "ingress-addon-legacy-909642": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1218 23:36:51.222186 4043226 machine.go:91] provisioned docker machine in 4.073081745s
	I1218 23:36:51.222193 4043226 client.go:171] LocalClient.Create took 11.747892239s
	I1218 23:36:51.222211 4043226 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-909642" took 11.747948247s
	I1218 23:36:51.222227 4043226 start.go:300] post-start starting for "ingress-addon-legacy-909642" (driver="docker")
	I1218 23:36:51.222238 4043226 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:36:51.222291 4043226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:36:51.222338 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:51.239668 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:36:51.343694 4043226 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:36:51.347752 4043226 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:36:51.347801 4043226 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:36:51.347817 4043226 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:36:51.347826 4043226 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:36:51.347837 4043226 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-4004447/.minikube/addons for local assets ...
	I1218 23:36:51.347900 4043226 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-4004447/.minikube/files for local assets ...
	I1218 23:36:51.347983 4043226 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/40097792.pem -> 40097792.pem in /etc/ssl/certs
	I1218 23:36:51.347994 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/40097792.pem -> /etc/ssl/certs/40097792.pem
	I1218 23:36:51.348102 4043226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 23:36:51.358343 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/40097792.pem --> /etc/ssl/certs/40097792.pem (1708 bytes)
	I1218 23:36:51.386233 4043226 start.go:303] post-start completed in 163.984188ms
	I1218 23:36:51.386646 4043226 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-909642
	I1218 23:36:51.404373 4043226 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/config.json ...
	I1218 23:36:51.404655 4043226 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:36:51.404708 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:51.422169 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:36:51.523041 4043226 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 23:36:51.528767 4043226 start.go:128] duration metric: createHost completed in 12.057430214s
	I1218 23:36:51.528793 4043226 start.go:83] releasing machines lock for "ingress-addon-legacy-909642", held for 12.05755891s
	I1218 23:36:51.528888 4043226 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-909642
	I1218 23:36:51.546183 4043226 ssh_runner.go:195] Run: cat /version.json
	I1218 23:36:51.546238 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:51.546286 4043226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:36:51.546343 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:36:51.573767 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:36:51.573767 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:36:51.673766 4043226 ssh_runner.go:195] Run: systemctl --version
	I1218 23:36:51.810725 4043226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:36:51.816393 4043226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1218 23:36:51.846142 4043226 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:36:51.846228 4043226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:36:51.879903 4043226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1218 23:36:51.879973 4043226 start.go:475] detecting cgroup driver to use...
	I1218 23:36:51.880020 4043226 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:36:51.880096 4043226 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 23:36:51.895832 4043226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 23:36:51.909540 4043226 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:36:51.909637 4043226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:36:51.925256 4043226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:36:51.941912 4043226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:36:52.048136 4043226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:36:52.158408 4043226 docker.go:219] disabling docker service ...
	I1218 23:36:52.158518 4043226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:36:52.180390 4043226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:36:52.194356 4043226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:36:52.301088 4043226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:36:52.401531 4043226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:36:52.415554 4043226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:36:52.435600 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1218 23:36:52.448007 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 23:36:52.460352 4043226 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 23:36:52.460475 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 23:36:52.472577 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 23:36:52.484740 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 23:36:52.497827 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 23:36:52.509620 4043226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:36:52.521141 4043226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 23:36:52.533161 4043226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:36:52.543793 4043226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:36:52.554402 4043226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:36:52.645473 4043226 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 23:36:52.782033 4043226 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 23:36:52.782151 4043226 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 23:36:52.786973 4043226 start.go:543] Will wait 60s for crictl version
	I1218 23:36:52.787087 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:52.791747 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:36:52.836042 4043226 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I1218 23:36:52.836156 4043226 ssh_runner.go:195] Run: containerd --version
	I1218 23:36:52.865044 4043226 ssh_runner.go:195] Run: containerd --version
	I1218 23:36:52.896033 4043226 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.26 ...
	I1218 23:36:52.898386 4043226 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-909642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:36:52.916577 4043226 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 23:36:52.921287 4043226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
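The `host.minikube.internal` entry is maintained with a grep-filter-then-append idiom: rebuild the file without any stale mapping, append the fresh one, and copy the result back. A sketch against a scratch hosts file (no sudo; the pre-existing entries are made up for illustration):

```shell
# Sketch only: the real run rewrites /etc/hosts on the node via sudo cp.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any stale host.minikube.internal line, then append the current mapping.
{ grep -v 'host.minikube.internal$' "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
out=$(cat "$hosts")
rm -f "$hosts" "$hosts.new"
```

This makes the update idempotent: rerunning it leaves exactly one mapping regardless of what was there before.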
	I1218 23:36:52.935167 4043226 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1218 23:36:52.935240 4043226 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:36:52.975562 4043226 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1218 23:36:52.975635 4043226 ssh_runner.go:195] Run: which lz4
	I1218 23:36:52.979997 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1218 23:36:52.980100 4043226 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1218 23:36:52.984700 4043226 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 23:36:52.984743 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1218 23:36:55.125424 4043226 containerd.go:547] Took 2.145338 seconds to copy over tarball
	I1218 23:36:55.125511 4043226 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 23:36:57.859631 4043226 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.7340916s)
	I1218 23:36:57.859662 4043226 containerd.go:554] Took 2.734212 seconds to extract the tarball
	I1218 23:36:57.859673 4043226 ssh_runner.go:146] rm: /preloaded.tar.lz4
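The preload sequence above probes the remote path with `stat` and treats a non-zero exit status as "file absent", which is what triggers the scp of the tarball. A local sketch of that decision (the path here is a hypothetical stand-in for `/preloaded.tar.lz4` on the node):

```shell
# Sketch only: minikube runs this stat over ssh against /preloaded.tar.lz4.
target="$(mktemp -d)/preloaded.tar.lz4"   # deliberately absent, like on a fresh node
if stat -c "%s %y" "$target" >/dev/null 2>&1; then
  decision="skip-copy"   # file already present: no transfer needed
else
  decision="copy"        # stat exited non-zero: scp the preload tarball over
fi
echo "$decision"
```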
	I1218 23:36:57.946789 4043226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:36:58.052656 4043226 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 23:36:58.203012 4043226 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:36:58.250506 4043226 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1218 23:36:58.250538 4043226 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1218 23:36:58.250590 4043226 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:36:58.250777 4043226 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:36:58.250864 4043226 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:36:58.250956 4043226 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:36:58.251037 4043226 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:36:58.251112 4043226 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1218 23:36:58.251184 4043226 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1218 23:36:58.251249 4043226 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1218 23:36:58.252143 4043226 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:36:58.252651 4043226 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:36:58.252835 4043226 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:36:58.253115 4043226 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1218 23:36:58.253269 4043226 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1218 23:36:58.253409 4043226 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1218 23:36:58.253537 4043226 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:36:58.253677 4043226 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:36:58.605057 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"
	I1218 23:36:58.605132 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.608934 4043226 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.609096 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7"
	I1218 23:36:58.609157 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.622480 4043226 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.622619 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79"
	I1218 23:36:58.622682 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.636425 4043226 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.636591 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
	I1218 23:36:58.636662 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.638978 4043226 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.639221 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18"
	I1218 23:36:58.639297 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.650075 4043226 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.650256 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03"
	I1218 23:36:58.650328 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.655159 4043226 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.655299 4043226 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
	I1218 23:36:58.655368 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W1218 23:36:58.763660 4043226 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1218 23:36:58.763775 4043226 containerd.go:251] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1218 23:36:58.763839 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 23:36:59.050414 4043226 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1218 23:36:59.050518 4043226 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1218 23:36:59.050595 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.051129 4043226 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1218 23:36:59.051177 4043226 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:36:59.051241 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.306740 4043226 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1218 23:36:59.306842 4043226 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:36:59.306931 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.423422 4043226 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1218 23:36:59.423470 4043226 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1218 23:36:59.423519 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.424039 4043226 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1218 23:36:59.424063 4043226 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:36:59.424090 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.424574 4043226 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1218 23:36:59.424597 4043226 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1218 23:36:59.424628 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.450843 4043226 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1218 23:36:59.450893 4043226 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:36:59.450951 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.482559 4043226 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1218 23:36:59.482642 4043226 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:36:59.482683 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1218 23:36:59.482732 4043226 ssh_runner.go:195] Run: which crictl
	I1218 23:36:59.482780 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:36:59.482652 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:36:59.482912 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1218 23:36:59.482976 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1218 23:36:59.483036 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:36:59.483067 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:36:59.690531 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1218 23:36:59.690611 4043226 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:36:59.690711 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1218 23:36:59.690779 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1218 23:36:59.690815 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1218 23:36:59.690858 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1218 23:36:59.690941 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1218 23:36:59.690966 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1218 23:36:59.745123 4043226 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1218 23:36:59.745162 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 -> /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:36:59.745249 4043226 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:36:59.749657 4043226 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1218 23:36:59.749693 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1218 23:36:59.828149 4043226 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:36:59.828229 4043226 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:37:00.458185 4043226 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1218 23:37:00.458259 4043226 cache_images.go:92] LoadImages completed in 2.207706984s
	W1218 23:37:00.458335 4043226 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
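The warning above is the root of the reported failure: `LoadImages` fell back to the per-architecture image cache, but `pause_3.2` (and, per the test header, the other expected files) was never written under `cache/images/arm64/`. A quick way to audit which expected cache files are missing (the cache root below is a hypothetical stand-in for `.minikube/cache/images/arm64/registry.k8s.io`, populated artificially for illustration):

```shell
# Sketch only: point $cache at the real cache directory to audit an actual run.
cache="$(mktemp -d)/arm64/registry.k8s.io"
mkdir -p "$cache"
touch "$cache/etcd_3.4.3-0"   # pretend only the etcd image made it into the cache
missing=""
for img in pause_3.2 etcd_3.4.3-0 coredns_1.6.7; do
  [ -f "$cache/$img" ] || missing="$missing $img"
done
echo "missing:$missing"
```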
	I1218 23:37:00.458419 4043226 ssh_runner.go:195] Run: sudo crictl info
	I1218 23:37:00.505698 4043226 cni.go:84] Creating CNI manager for ""
	I1218 23:37:00.505726 4043226 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:37:00.505749 4043226 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:37:00.505768 4043226 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-909642 NodeName:ingress-addon-legacy-909642 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1218 23:37:00.505901 4043226 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-909642"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 23:37:00.505969 4043226 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-909642 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-909642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
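The kubelet unit above relies on the systemd drop-in convention: the file is scp'd to `kubelet.service.d/10-kubeadm.conf`, and the bare `ExecStart=` line clears the base unit's command before the override takes effect. A sketch that writes an equivalent drop-in to a temp directory (the flag values are copied from the log; the target directory is a stand-in):

```shell
# Sketch only: the real drop-in lands in /etc/systemd/system/kubelet.service.d via scp.
dropin=$(mktemp -d)
cat > "$dropin/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.49.2
EOF
# Two ExecStart lines: the empty one resets, the second one overrides.
n=$(grep -c '^ExecStart=' "$dropin/10-kubeadm.conf")
echo "$n"
```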
	I1218 23:37:00.506038 4043226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1218 23:37:00.517449 4043226 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:37:00.517531 4043226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 23:37:00.529874 4043226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1218 23:37:00.552556 4043226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1218 23:37:00.575016 4043226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I1218 23:37:00.596445 4043226 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:37:00.601639 4043226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:37:00.615558 4043226 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642 for IP: 192.168.49.2
	I1218 23:37:00.615590 4043226 certs.go:190] acquiring lock for shared ca certs: {Name:mk406b12e6a80d6e5757943ee55b3a3d6680c96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:00.615724 4043226 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key
	I1218 23:37:00.615775 4043226 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key
	I1218 23:37:00.615852 4043226 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.key
	I1218 23:37:00.615868 4043226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt with IP's: []
	I1218 23:37:01.141107 4043226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt ...
	I1218 23:37:01.141140 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: {Name:mk8e5c5e6fee2cc31d4c0e52b2acb264c10d2236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:01.141353 4043226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.key ...
	I1218 23:37:01.141369 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.key: {Name:mk0ef1419381863d72f8119354a3ce62310f768b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:01.141463 4043226 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key.dd3b5fb2
	I1218 23:37:01.141480 4043226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 23:37:01.732972 4043226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt.dd3b5fb2 ...
	I1218 23:37:01.733008 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt.dd3b5fb2: {Name:mk8375681337e29f0c1b1277cdf7301619f2d926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:01.733200 4043226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key.dd3b5fb2 ...
	I1218 23:37:01.733217 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key.dd3b5fb2: {Name:mke6cb2b51f1ab8a3e59dd5daa8b741ed022506d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:01.733312 4043226 certs.go:337] copying /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt
	I1218 23:37:01.733404 4043226 certs.go:341] copying /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key
	I1218 23:37:01.733471 4043226 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.key
	I1218 23:37:01.733489 4043226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.crt with IP's: []
	I1218 23:37:02.070044 4043226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.crt ...
	I1218 23:37:02.070077 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.crt: {Name:mk437814cb15a18a805ebecebdb989470c24f891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:02.070268 4043226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.key ...
	I1218 23:37:02.070283 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.key: {Name:mk9f480b8694a9e82ba3d42bcecdf5758696ed16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:02.070368 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 23:37:02.070393 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 23:37:02.070405 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 23:37:02.070428 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 23:37:02.070445 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 23:37:02.070460 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 23:37:02.070478 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 23:37:02.070490 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 23:37:02.070547 4043226 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/4009779.pem (1338 bytes)
	W1218 23:37:02.070588 4043226 certs.go:433] ignoring /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/4009779_empty.pem, impossibly tiny 0 bytes
	I1218 23:37:02.070605 4043226 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:37:02.070636 4043226 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/ca.pem (1082 bytes)
	I1218 23:37:02.070668 4043226 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:37:02.070694 4043226 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/certs/key.pem (1675 bytes)
	I1218 23:37:02.070740 4043226 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/40097792.pem (1708 bytes)
	I1218 23:37:02.070774 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/4009779.pem -> /usr/share/ca-certificates/4009779.pem
	I1218 23:37:02.070791 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/40097792.pem -> /usr/share/ca-certificates/40097792.pem
	I1218 23:37:02.070803 4043226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:37:02.071396 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 23:37:02.102023 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 23:37:02.131590 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 23:37:02.162308 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 23:37:02.192380 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:37:02.222111 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 23:37:02.251880 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:37:02.281241 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 23:37:02.309997 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/certs/4009779.pem --> /usr/share/ca-certificates/4009779.pem (1338 bytes)
	I1218 23:37:02.339264 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/ssl/certs/40097792.pem --> /usr/share/ca-certificates/40097792.pem (1708 bytes)
	I1218 23:37:02.367844 4043226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:37:02.396694 4043226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 23:37:02.418813 4043226 ssh_runner.go:195] Run: openssl version
	I1218 23:37:02.426245 4043226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4009779.pem && ln -fs /usr/share/ca-certificates/4009779.pem /etc/ssl/certs/4009779.pem"
	I1218 23:37:02.438074 4043226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4009779.pem
	I1218 23:37:02.442797 4043226 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 23:33 /usr/share/ca-certificates/4009779.pem
	I1218 23:37:02.442891 4043226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4009779.pem
	I1218 23:37:02.451836 4043226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4009779.pem /etc/ssl/certs/51391683.0"
	I1218 23:37:02.464387 4043226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40097792.pem && ln -fs /usr/share/ca-certificates/40097792.pem /etc/ssl/certs/40097792.pem"
	I1218 23:37:02.476341 4043226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40097792.pem
	I1218 23:37:02.481287 4043226 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 23:33 /usr/share/ca-certificates/40097792.pem
	I1218 23:37:02.481414 4043226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40097792.pem
	I1218 23:37:02.490179 4043226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40097792.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 23:37:02.502907 4043226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:37:02.514835 4043226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:37:02.519511 4043226 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:37:02.519588 4043226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:37:02.528337 4043226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:37:02.540700 4043226 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:37:02.545867 4043226 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:37:02.545927 4043226 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-909642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-909642 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:37:02.546005 4043226 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 23:37:02.546080 4043226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:37:02.592445 4043226 cri.go:89] found id: ""
	I1218 23:37:02.592601 4043226 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 23:37:02.603915 4043226 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 23:37:02.615343 4043226 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 23:37:02.615423 4043226 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 23:37:02.626846 4043226 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:37:02.626903 4043226 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 23:37:02.685900 4043226 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1218 23:37:02.686178 4043226 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 23:37:02.739799 4043226 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:37:02.739873 4043226 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:37:02.739912 4043226 kubeadm.go:322] OS: Linux
	I1218 23:37:02.739963 4043226 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 23:37:02.740016 4043226 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 23:37:02.740065 4043226 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 23:37:02.740113 4043226 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 23:37:02.740163 4043226 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 23:37:02.740211 4043226 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 23:37:02.841171 4043226 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:37:02.841291 4043226 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:37:02.841388 4043226 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 23:37:03.091771 4043226 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:37:03.093701 4043226 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:37:03.093943 4043226 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 23:37:03.201261 4043226 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:37:03.204082 4043226 out.go:204]   - Generating certificates and keys ...
	I1218 23:37:03.204222 4043226 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 23:37:03.204319 4043226 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 23:37:04.229848 4043226 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:37:05.889023 4043226 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:37:06.230193 4043226 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 23:37:06.552579 4043226 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 23:37:07.074427 4043226 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 23:37:07.074779 4043226 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-909642 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:37:07.334734 4043226 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 23:37:07.335065 4043226 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-909642 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:37:07.507228 4043226 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:37:07.923642 4043226 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:37:08.569846 4043226 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 23:37:08.570105 4043226 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:37:08.811242 4043226 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:37:09.226071 4043226 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:37:09.676092 4043226 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:37:10.470623 4043226 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:37:10.471750 4043226 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:37:10.474470 4043226 out.go:204]   - Booting up control plane ...
	I1218 23:37:10.474591 4043226 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:37:10.490139 4043226 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:37:10.490230 4043226 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:37:10.490318 4043226 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:37:10.496889 4043226 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:37:22.501205 4043226 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.003024 seconds
	I1218 23:37:22.501323 4043226 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:37:22.515389 4043226 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:37:23.035313 4043226 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:37:23.035498 4043226 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-909642 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1218 23:37:23.545803 4043226 kubeadm.go:322] [bootstrap-token] Using token: ivoa7u.ypdvzijpk33z6x1f
	I1218 23:37:23.547831 4043226 out.go:204]   - Configuring RBAC rules ...
	I1218 23:37:23.547951 4043226 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:37:23.556225 4043226 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:37:23.565633 4043226 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:37:23.570230 4043226 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:37:23.574387 4043226 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:37:23.578976 4043226 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:37:23.589000 4043226 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:37:23.906430 4043226 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 23:37:23.977878 4043226 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 23:37:23.979652 4043226 kubeadm.go:322] 
	I1218 23:37:23.979733 4043226 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 23:37:23.979740 4043226 kubeadm.go:322] 
	I1218 23:37:23.979827 4043226 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 23:37:23.979833 4043226 kubeadm.go:322] 
	I1218 23:37:23.979857 4043226 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 23:37:23.980329 4043226 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:37:23.980392 4043226 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:37:23.980399 4043226 kubeadm.go:322] 
	I1218 23:37:23.980449 4043226 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 23:37:23.980520 4043226 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:37:23.980584 4043226 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:37:23.980590 4043226 kubeadm.go:322] 
	I1218 23:37:23.980954 4043226 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:37:23.981051 4043226 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 23:37:23.981056 4043226 kubeadm.go:322] 
	I1218 23:37:23.981399 4043226 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ivoa7u.ypdvzijpk33z6x1f \
	I1218 23:37:23.981503 4043226 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e312defcfa05e02bd60f7c592d29d4c5d570ecf2885804f11be3cfbfa6eee99b \
	I1218 23:37:23.981751 4043226 kubeadm.go:322]     --control-plane 
	I1218 23:37:23.981768 4043226 kubeadm.go:322] 
	I1218 23:37:23.982088 4043226 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:37:23.982098 4043226 kubeadm.go:322] 
	I1218 23:37:23.982410 4043226 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ivoa7u.ypdvzijpk33z6x1f \
	I1218 23:37:23.982758 4043226 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e312defcfa05e02bd60f7c592d29d4c5d570ecf2885804f11be3cfbfa6eee99b 
	I1218 23:37:23.986473 4043226 kubeadm.go:322] W1218 23:37:02.685018    1107 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1218 23:37:23.986682 4043226 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:37:23.986781 4043226 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:37:23.986913 4043226 kubeadm.go:322] W1218 23:37:10.485087    1107 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1218 23:37:23.987032 4043226 kubeadm.go:322] W1218 23:37:10.486607    1107 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1218 23:37:23.987047 4043226 cni.go:84] Creating CNI manager for ""
	I1218 23:37:23.987055 4043226 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:37:23.990634 4043226 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 23:37:23.992911 4043226 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:37:23.999089 4043226 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1218 23:37:23.999177 4043226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:37:24.025720 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:37:24.490226 4043226 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 23:37:24.490368 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:24.490453 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=ingress-addon-legacy-909642 minikube.k8s.io/updated_at=2023_12_18T23_37_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:24.633299 4043226 ops.go:34] apiserver oom_adj: -16
	I1218 23:37:24.633329 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:25.134258 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:25.633678 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:26.134364 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:26.634217 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:27.134104 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:27.634049 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:28.133832 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:28.633688 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:29.133751 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:29.633513 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:30.133541 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:30.633580 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:31.134020 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:31.633530 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:32.134238 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:32.633646 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:33.134420 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:33.633933 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:34.134402 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:34.633904 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:35.134237 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:35.634112 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:36.134270 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:36.633469 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:37.134366 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:37.634085 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:38.133904 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:38.634118 4043226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:37:38.757156 4043226 kubeadm.go:1088] duration metric: took 14.266830963s to wait for elevateKubeSystemPrivileges.
	I1218 23:37:38.757190 4043226 kubeadm.go:406] StartCluster complete in 36.21126864s
	I1218 23:37:38.757208 4043226 settings.go:142] acquiring lock: {Name:mkc0bc26fbf229b708fca267aea9769f0f259f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:38.757291 4043226 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:37:38.757981 4043226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/kubeconfig: {Name:mk056ad1e9e70ee26734d70551bb1d18ee8e2c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:37:38.758707 4043226 kapi.go:59] client config for ingress-addon-legacy-909642: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.key", CAFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:37:38.758961 4043226 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 23:37:38.759224 4043226 config.go:182] Loaded profile config "ingress-addon-legacy-909642": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1218 23:37:38.759269 4043226 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 23:37:38.759335 4043226 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-909642"
	I1218 23:37:38.759350 4043226 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-909642"
	I1218 23:37:38.759406 4043226 host.go:66] Checking if "ingress-addon-legacy-909642" exists ...
	I1218 23:37:38.759882 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Status}}
	I1218 23:37:38.760140 4043226 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-909642"
	I1218 23:37:38.760162 4043226 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-909642"
	I1218 23:37:38.760416 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Status}}
	I1218 23:37:38.761734 4043226 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 23:37:38.807078 4043226 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:37:38.809276 4043226 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:37:38.809299 4043226 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 23:37:38.809369 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:37:38.822239 4043226 kapi.go:59] client config for ingress-addon-legacy-909642: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.key", CAFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:37:38.824427 4043226 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-909642"
	I1218 23:37:38.824481 4043226 host.go:66] Checking if "ingress-addon-legacy-909642" exists ...
	I1218 23:37:38.825012 4043226 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-909642 --format={{.State.Status}}
	I1218 23:37:38.847415 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:37:38.858320 4043226 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 23:37:38.858347 4043226 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 23:37:38.858434 4043226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-909642
	I1218 23:37:38.894591 4043226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42691 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/ingress-addon-legacy-909642/id_rsa Username:docker}
	I1218 23:37:38.986108 4043226 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 23:37:39.204956 4043226 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 23:37:39.207688 4043226 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:37:39.263910 4043226 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-909642" context rescaled to 1 replicas
	I1218 23:37:39.263988 4043226 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 23:37:39.266160 4043226 out.go:177] * Verifying Kubernetes components...
	I1218 23:37:39.268390 4043226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:37:39.447666 4043226 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1218 23:37:39.811596 4043226 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1218 23:37:39.810158 4043226 kapi.go:59] client config for ingress-addon-legacy-909642: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.key", CAFile:"/home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:37:39.811917 4043226 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-909642" to be "Ready" ...
	I1218 23:37:39.813885 4043226 addons.go:502] enable addons completed in 1.054609294s: enabled=[default-storageclass storage-provisioner]
	I1218 23:37:39.821014 4043226 node_ready.go:49] node "ingress-addon-legacy-909642" has status "Ready":"True"
	I1218 23:37:39.821038 4043226 node_ready.go:38] duration metric: took 9.10364ms waiting for node "ingress-addon-legacy-909642" to be "Ready" ...
	I1218 23:37:39.821049 4043226 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:37:39.829475 4043226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-2rc2v" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:41.835865 4043226 pod_ready.go:102] pod "coredns-66bff467f8-2rc2v" in "kube-system" namespace has status "Ready":"False"
	I1218 23:37:44.336285 4043226 pod_ready.go:102] pod "coredns-66bff467f8-2rc2v" in "kube-system" namespace has status "Ready":"False"
	I1218 23:37:46.335920 4043226 pod_ready.go:92] pod "coredns-66bff467f8-2rc2v" in "kube-system" namespace has status "Ready":"True"
	I1218 23:37:46.335949 4043226 pod_ready.go:81] duration metric: took 6.506444256s waiting for pod "coredns-66bff467f8-2rc2v" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.335965 4043226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.340640 4043226 pod_ready.go:92] pod "etcd-ingress-addon-legacy-909642" in "kube-system" namespace has status "Ready":"True"
	I1218 23:37:46.340670 4043226 pod_ready.go:81] duration metric: took 4.696549ms waiting for pod "etcd-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.340685 4043226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.345834 4043226 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-909642" in "kube-system" namespace has status "Ready":"True"
	I1218 23:37:46.345862 4043226 pod_ready.go:81] duration metric: took 5.169268ms waiting for pod "kube-apiserver-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.345875 4043226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.351363 4043226 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-909642" in "kube-system" namespace has status "Ready":"True"
	I1218 23:37:46.351391 4043226 pod_ready.go:81] duration metric: took 5.507621ms waiting for pod "kube-controller-manager-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.351403 4043226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvtb5" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.356518 4043226 pod_ready.go:92] pod "kube-proxy-kvtb5" in "kube-system" namespace has status "Ready":"True"
	I1218 23:37:46.356546 4043226 pod_ready.go:81] duration metric: took 5.135857ms waiting for pod "kube-proxy-kvtb5" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.356558 4043226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.530995 4043226 request.go:629] Waited for 174.326671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-909642
	I1218 23:37:46.731084 4043226 request.go:629] Waited for 197.343755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-909642
	I1218 23:37:46.734036 4043226 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-909642" in "kube-system" namespace has status "Ready":"True"
	I1218 23:37:46.734064 4043226 pod_ready.go:81] duration metric: took 377.477081ms waiting for pod "kube-scheduler-ingress-addon-legacy-909642" in "kube-system" namespace to be "Ready" ...
	I1218 23:37:46.734094 4043226 pod_ready.go:38] duration metric: took 6.913017575s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:37:46.734123 4043226 api_server.go:52] waiting for apiserver process to appear ...
	I1218 23:37:46.734198 4043226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:37:46.747377 4043226 api_server.go:72] duration metric: took 7.483343634s to wait for apiserver process to appear ...
	I1218 23:37:46.747400 4043226 api_server.go:88] waiting for apiserver healthz status ...
	I1218 23:37:46.747420 4043226 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 23:37:46.756292 4043226 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1218 23:37:46.757482 4043226 api_server.go:141] control plane version: v1.18.20
	I1218 23:37:46.757515 4043226 api_server.go:131] duration metric: took 10.102739ms to wait for apiserver health ...
	I1218 23:37:46.757525 4043226 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 23:37:46.930796 4043226 request.go:629] Waited for 173.203602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:37:46.936765 4043226 system_pods.go:59] 8 kube-system pods found
	I1218 23:37:46.936803 4043226 system_pods.go:61] "coredns-66bff467f8-2rc2v" [1ea73b09-12ac-4310-a302-25c0453e847f] Running
	I1218 23:37:46.936811 4043226 system_pods.go:61] "etcd-ingress-addon-legacy-909642" [75fc4b46-1598-4315-be62-809cf0522929] Running
	I1218 23:37:46.936819 4043226 system_pods.go:61] "kindnet-9zfxw" [6e923276-be32-4852-ba60-7760b50c53a2] Running
	I1218 23:37:46.936826 4043226 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-909642" [93e4defb-ce4c-41b8-a4e3-54660982ae5d] Running
	I1218 23:37:46.936832 4043226 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-909642" [48b8b368-656b-4d41-b8d5-7e65cb01abe0] Running
	I1218 23:37:46.936837 4043226 system_pods.go:61] "kube-proxy-kvtb5" [19f25656-6019-4586-8c63-31bebb045fb5] Running
	I1218 23:37:46.936843 4043226 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-909642" [52cffdff-3937-4210-8c4d-11b8e1f76372] Running
	I1218 23:37:46.936848 4043226 system_pods.go:61] "storage-provisioner" [b6286130-1054-4e45-9f30-7a6f54d609de] Running
	I1218 23:37:46.936855 4043226 system_pods.go:74] duration metric: took 179.323879ms to wait for pod list to return data ...
	I1218 23:37:46.936908 4043226 default_sa.go:34] waiting for default service account to be created ...
	I1218 23:37:47.131358 4043226 request.go:629] Waited for 194.364765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1218 23:37:47.133897 4043226 default_sa.go:45] found service account: "default"
	I1218 23:37:47.133926 4043226 default_sa.go:55] duration metric: took 197.004935ms for default service account to be created ...
	I1218 23:37:47.133937 4043226 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 23:37:47.330974 4043226 request.go:629] Waited for 196.972722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:37:47.337033 4043226 system_pods.go:86] 8 kube-system pods found
	I1218 23:37:47.337071 4043226 system_pods.go:89] "coredns-66bff467f8-2rc2v" [1ea73b09-12ac-4310-a302-25c0453e847f] Running
	I1218 23:37:47.337079 4043226 system_pods.go:89] "etcd-ingress-addon-legacy-909642" [75fc4b46-1598-4315-be62-809cf0522929] Running
	I1218 23:37:47.337085 4043226 system_pods.go:89] "kindnet-9zfxw" [6e923276-be32-4852-ba60-7760b50c53a2] Running
	I1218 23:37:47.337090 4043226 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-909642" [93e4defb-ce4c-41b8-a4e3-54660982ae5d] Running
	I1218 23:37:47.337132 4043226 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-909642" [48b8b368-656b-4d41-b8d5-7e65cb01abe0] Running
	I1218 23:37:47.337147 4043226 system_pods.go:89] "kube-proxy-kvtb5" [19f25656-6019-4586-8c63-31bebb045fb5] Running
	I1218 23:37:47.337154 4043226 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-909642" [52cffdff-3937-4210-8c4d-11b8e1f76372] Running
	I1218 23:37:47.337160 4043226 system_pods.go:89] "storage-provisioner" [b6286130-1054-4e45-9f30-7a6f54d609de] Running
	I1218 23:37:47.337174 4043226 system_pods.go:126] duration metric: took 203.230459ms to wait for k8s-apps to be running ...
	I1218 23:37:47.337196 4043226 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:37:47.337279 4043226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:37:47.362264 4043226 system_svc.go:56] duration metric: took 25.058391ms WaitForService to wait for kubelet.
	I1218 23:37:47.362304 4043226 kubeadm.go:581] duration metric: took 8.098277073s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:37:47.362342 4043226 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:37:47.530906 4043226 request.go:629] Waited for 168.409232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1218 23:37:47.533794 4043226 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:37:47.533826 4043226 node_conditions.go:123] node cpu capacity is 2
	I1218 23:37:47.533837 4043226 node_conditions.go:105] duration metric: took 171.48263ms to run NodePressure ...
	I1218 23:37:47.533848 4043226 start.go:228] waiting for startup goroutines ...
	I1218 23:37:47.533883 4043226 start.go:233] waiting for cluster config update ...
	I1218 23:37:47.533901 4043226 start.go:242] writing updated cluster config ...
	I1218 23:37:47.534189 4043226 ssh_runner.go:195] Run: rm -f paused
	I1218 23:37:47.612482 4043226 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1218 23:37:47.614796 4043226 out.go:177] 
	W1218 23:37:47.616757 4043226 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1218 23:37:47.619080 4043226 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1218 23:37:47.621066 4043226 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-909642" cluster and "default" namespace by default
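	The repeated "Waited for … due to client-side throttling" lines above come from the Kubernetes client's token-bucket rate limiter queuing requests (the rest.Config earlier in the log shows QPS:0, Burst:0, meaning client defaults apply). A minimal sketch of the mechanism, assuming a plain token bucket — hypothetical code, not client-go's implementation:

```python
import time

class TokenBucket:
    """Simplified sketch of a client-side rate limiter in the spirit of
    client-go's flowcontrol token bucket (hypothetical: no fairness,
    no max-wait cap)."""

    def __init__(self, qps: float, burst: int):
        self.rate = qps                # tokens replenished per second
        self.capacity = float(burst)   # burst size caps stored tokens
        self.tokens = float(burst)     # start with a full burst
        self.last = time.monotonic()

    def wait_time(self) -> float:
        """Seconds the caller must sleep before issuing its next request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= 1.0             # reserve a token (may go negative)
        if self.tokens >= 0.0:
            return 0.0                 # within burst: no throttling
        return -self.tokens / self.rate  # token debt converted to a delay
```

Letting the balance go negative is what makes successive bursty requests queue behind each other with growing delays, matching the ~170–200 ms waits reported by request.go above.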
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2092d174e8f9d       dd1b12fcb6097       13 seconds ago       Exited              hello-world-app           2                   50879e8485786       hello-world-app-5f5d8b66bb-h9n29
	49f471382def0       66749159455b3       34 seconds ago       Running             storage-provisioner       1                   ee625b92aff1c       storage-provisioner
	dd6fa506a07f1       f09fc93534f6a       37 seconds ago       Running             nginx                     0                   85819ca2b4055       nginx
	8498e7d97e8b8       d7f0cba3aa5bf       49 seconds ago       Exited              controller                0                   e349839b150a8       ingress-nginx-controller-7fcf777cb7-f24p2
	9ac2ec9f74152       a883f7fc35610       55 seconds ago       Exited              patch                     0                   041b2dbc5b306       ingress-nginx-admission-patch-296jw
	2f090d20d0ef0       a883f7fc35610       55 seconds ago       Exited              create                    0                   4e5e4a7412430       ingress-nginx-admission-create-zkjdl
	4c6900d98b299       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   da635dd53a798       coredns-66bff467f8-2rc2v
	c794bcdbc41a4       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   c4a011055fc1d       kindnet-9zfxw
	9ac713004c6a7       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   f2fcf198b1642       kube-proxy-kvtb5
	1abf17d71043d       66749159455b3       About a minute ago   Exited              storage-provisioner       0                   ee625b92aff1c       storage-provisioner
	d8b92086b7afe       095f37015706d       About a minute ago   Running             kube-scheduler            0                   deb2477e611ff       kube-scheduler-ingress-addon-legacy-909642
	c029df61d5a93       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   2d925d97dbfee       kube-apiserver-ingress-addon-legacy-909642
	69d0b39930e65       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   8c47c6ce375a3       kube-controller-manager-ingress-addon-legacy-909642
	9a25af9c2ba2a       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   fea0307ece677       etcd-ingress-addon-legacy-909642
	
	* 
	* ==> containerd <==
	* Dec 18 23:38:33 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:33.611760508Z" level=info msg="RemoveContainer for \"75c6649f7f5007aa2eb486b057eb0be052a9abbdc68249fa07a386ba6f01058e\""
	Dec 18 23:38:33 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:33.617321002Z" level=info msg="RemoveContainer for \"75c6649f7f5007aa2eb486b057eb0be052a9abbdc68249fa07a386ba6f01058e\" returns successfully"
	Dec 18 23:38:38 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:38.272728792Z" level=info msg="StopContainer for \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" with timeout 2 (s)"
	Dec 18 23:38:38 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:38.275667823Z" level=info msg="Stop container \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" with signal terminated"
	Dec 18 23:38:38 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:38.283302773Z" level=info msg="StopContainer for \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" with timeout 2 (s)"
	Dec 18 23:38:38 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:38.286766034Z" level=info msg="Skipping the sending of signal terminated to container \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" because a prior stop with timeout>0 request already sent the signal"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.287414576Z" level=info msg="Kill container \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\""
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.287485484Z" level=info msg="Kill container \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\""
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.369181088Z" level=info msg="shim disconnected" id=8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.369352993Z" level=warning msg="cleaning up after shim disconnected" id=8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2 namespace=k8s.io
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.369377846Z" level=info msg="cleaning up dead shim"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.380073935Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:38:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4581 runtime=io.containerd.runc.v2\n"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.382655422Z" level=info msg="StopContainer for \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" returns successfully"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.382785579Z" level=info msg="StopContainer for \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" returns successfully"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.383332923Z" level=info msg="StopPodSandbox for \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\""
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.383402961Z" level=info msg="Container to stop \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.383625376Z" level=info msg="StopPodSandbox for \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\""
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.383666886Z" level=info msg="Container to stop \"8498e7d97e8b831a44d5434de2deabba08a50a9b696a6380e451016c0bf447b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.425925919Z" level=info msg="shim disconnected" id=e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.426218191Z" level=warning msg="cleaning up after shim disconnected" id=e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e namespace=k8s.io
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.426247319Z" level=info msg="cleaning up dead shim"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.437406305Z" level=warning msg="cleanup warnings time=\"2023-12-18T23:38:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4618 runtime=io.containerd.runc.v2\n"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.478503082Z" level=error msg="StopPodSandbox for \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\" failed" error="failed to destroy network for sandbox \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-86854554bd8c2bdefae0e --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.503845106Z" level=info msg="TearDown network for sandbox \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\" successfully"
	Dec 18 23:38:40 ingress-addon-legacy-909642 containerd[825]: time="2023-12-18T23:38:40.503970250Z" level=info msg="StopPodSandbox for \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\" returns successfully"
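	The "StopContainer … with timeout 2" / "Kill container" sequence above is the standard graceful-stop protocol: deliver SIGTERM, wait out the grace period, then escalate to SIGKILL and reap the exit. A process-level sketch of the same protocol — a hypothetical helper, not containerd code:

```python
import signal
import subprocess

def stop_with_grace(proc: subprocess.Popen, grace: float) -> int:
    """Process-level analogue (POSIX) of the StopContainer flow in the
    containerd log: SIGTERM first, SIGKILL once the grace period expires."""
    proc.send_signal(signal.SIGTERM)    # "Stop container ... with signal terminated"
    try:
        return proc.wait(timeout=grace)  # exited within the grace period
    except subprocess.TimeoutExpired:
        proc.kill()                      # "Kill container ..." escalation
        return proc.wait()               # reap the SIGKILLed process
```

In the log the ingress controller outlived the 2 s grace period, hence the explicit "Kill container" lines before the shim reports that it disconnected.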
	
	* 
	* ==> coredns [4c6900d98b299d9bf8b461fbd1208313e7b1966664a5b6533c906f03b9363ac0] <==
	* [INFO] 10.244.0.5:45831 - 49624 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00282992s
	[INFO] 10.244.0.5:52641 - 33244 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000848s
	[INFO] 10.244.0.5:52641 - 33129 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076357s
	[INFO] 10.244.0.5:45831 - 14560 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000181151s
	[INFO] 10.244.0.5:52641 - 16346 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001016502s
	[INFO] 10.244.0.5:52641 - 8430 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000875472s
	[INFO] 10.244.0.5:52641 - 6142 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067306s
	[INFO] 10.244.0.5:35341 - 41697 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089936s
	[INFO] 10.244.0.5:35341 - 20373 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082609s
	[INFO] 10.244.0.5:35341 - 56268 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093087s
	[INFO] 10.244.0.5:35341 - 65132 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006368s
	[INFO] 10.244.0.5:35341 - 52230 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061629s
	[INFO] 10.244.0.5:35341 - 37586 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054695s
	[INFO] 10.244.0.5:35341 - 44099 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001237456s
	[INFO] 10.244.0.5:35341 - 47590 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001312917s
	[INFO] 10.244.0.5:35341 - 34052 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000118498s
	[INFO] 10.244.0.5:33937 - 509 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062786s
	[INFO] 10.244.0.5:33937 - 10077 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042913s
	[INFO] 10.244.0.5:33937 - 2748 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000326644s
	[INFO] 10.244.0.5:33937 - 13678 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004009s
	[INFO] 10.244.0.5:33937 - 23296 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041558s
	[INFO] 10.244.0.5:33937 - 42446 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037555s
	[INFO] 10.244.0.5:33937 - 30107 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00111209s
	[INFO] 10.244.0.5:33937 - 27184 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000940802s
	[INFO] 10.244.0.5:33937 - 25395 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050002s
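	The NXDOMAIN burst above is ordinary resolv.conf search-list expansion, not a failure: the query name has fewer dots than ndots, so the Pod's stub resolver appends each search domain in turn before trying the name as written, and only the last attempt returns NOERROR. A minimal sketch of that lookup order — a hypothetical helper, with the search list inferred from the suffixes actually queried in the log:

```python
# Search list as suggested by the CoreDNS log: the Pod's own namespace
# domain first, then the cluster domains, then the node-level EC2 domain.
SEARCH = [
    "ingress-nginx.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "us-east-2.compute.internal",
]
NDOTS = 5  # kubelet's default "options ndots:5"

def candidates(name: str) -> list[str]:
    """Names tried, in order, when resolving `name` from the Pod."""
    if name.endswith("."):           # already fully qualified: no search
        return [name.rstrip(".")]
    tried = []
    if name.count(".") < NDOTS:      # fewer dots than ndots: search first
        tried += [f"{name}.{domain}" for domain in SEARCH]
    tried.append(name)               # finally, the name as written
    return tried
```

"hello-world-app.default.svc.cluster.local" contains only 4 dots, so every search domain is tried (and answered NXDOMAIN) before the bare name resolves, exactly as the log shows for each client port.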
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-909642
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-909642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=ingress-addon-legacy-909642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T23_37_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:37:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-909642
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:38:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:38:27 +0000   Mon, 18 Dec 2023 23:37:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:38:27 +0000   Mon, 18 Dec 2023 23:37:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:38:27 +0000   Mon, 18 Dec 2023 23:37:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:38:27 +0000   Mon, 18 Dec 2023 23:37:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-909642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9cf6256e5c5468fb04d377cb2e47a4a
	  System UUID:                3cd029de-f1d5-49ba-a2ea-59f6f5d197b9
	  Boot ID:                    890256b0-dbd9-440c-9da4-c1f4e1d4cc44
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-h9n29                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-66bff467f8-2rc2v                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     68s
	  kube-system                 etcd-ingress-addon-legacy-909642                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kindnet-9zfxw                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      67s
	  kube-system                 kube-apiserver-ingress-addon-legacy-909642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-909642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-kvtb5                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-ingress-addon-legacy-909642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  93s (x4 over 93s)  kubelet     Node ingress-addon-legacy-909642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x5 over 93s)  kubelet     Node ingress-addon-legacy-909642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x4 over 93s)  kubelet     Node ingress-addon-legacy-909642 status is now: NodeHasSufficientPID
	  Normal  Starting                 79s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s                kubelet     Node ingress-addon-legacy-909642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet     Node ingress-addon-legacy-909642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s                kubelet     Node ingress-addon-legacy-909642 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                69s                kubelet     Node ingress-addon-legacy-909642 status is now: NodeReady
	  Normal  Starting                 65s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001332] FS-Cache: O-key=[8] 'd86f5c0100000000'
	[  +0.000788] FS-Cache: N-cookie c=0000024c [p=00000243 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=0000000091cac24e
	[  +0.001137] FS-Cache: N-key=[8] 'd86f5c0100000000'
	[  +0.002980] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=00000245 [p=00000243 fl=226 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=000000006e7dfdfc
	[  +0.001203] FS-Cache: O-key=[8] 'd86f5c0100000000'
	[  +0.000975] FS-Cache: N-cookie c=0000024d [p=00000243 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000a95112a5
	[  +0.001092] FS-Cache: N-key=[8] 'd86f5c0100000000'
	[  +2.815531] FS-Cache: Duplicate cookie detected
	[  +0.000778] FS-Cache: O-cookie c=00000244 [p=00000243 fl=226 nc=0 na=1]
	[  +0.001133] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=0000000030d65e10
	[  +0.001106] FS-Cache: O-key=[8] 'd76f5c0100000000'
	[  +0.000747] FS-Cache: N-cookie c=0000024f [p=00000243 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000307c66c2
	[  +0.001126] FS-Cache: N-key=[8] 'd76f5c0100000000'
	[  +0.414017] FS-Cache: Duplicate cookie detected
	[  +0.000804] FS-Cache: O-cookie c=00000249 [p=00000243 fl=226 nc=0 na=1]
	[  +0.001088] FS-Cache: O-cookie d=0000000068c060ec{9p.inode} n=000000005b4eca8b
	[  +0.001111] FS-Cache: O-key=[8] 'de6f5c0100000000'
	[  +0.000767] FS-Cache: N-cookie c=00000250 [p=00000243 fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=0000000068c060ec{9p.inode} n=00000000af7bd860
	[  +0.001137] FS-Cache: N-key=[8] 'de6f5c0100000000'
	
	* 
	* ==> etcd [9a25af9c2ba2ae452dc8b3ad161080f916734dccf1de807fb93bf8ca7776977b] <==
	* raft2023/12/18 23:37:13 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/18 23:37:13 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/18 23:37:13 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-18 23:37:13.983259 W | auth: simple token is not cryptographically signed
	2023-12-18 23:37:13.986357 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-18 23:37:13.992129 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-18 23:37:13.992278 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-18 23:37:13.992513 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-18 23:37:13.992574 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/18 23:37:13 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-18 23:37:13.993035 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/18 23:37:14 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/18 23:37:14 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/18 23:37:14 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/18 23:37:14 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/18 23:37:14 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-18 23:37:14.437092 I | etcdserver: published {Name:ingress-addon-legacy-909642 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-18 23:37:14.529004 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-18 23:37:14.540268 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-18 23:37:14.576893 I | embed: ready to serve client requests
	2023-12-18 23:37:14.588907 I | embed: ready to serve client requests
	2023-12-18 23:37:14.598221 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-18 23:37:14.900898 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-18 23:37:14.901227 W | etcdserver: request "ID:8128025905834490372 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (360.862839ms) to execute
	2023-12-18 23:37:15.573710 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  23:38:46 up 2 days,  7:21,  0 users,  load average: 1.47, 2.04, 2.31
	Linux ingress-addon-legacy-909642 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c794bcdbc41a45681305ad73fa53b2808920fb651c794b1ae751c26a2d68f489] <==
	* I1218 23:37:41.307815       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1218 23:37:41.308068       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1218 23:37:41.308292       1 main.go:116] setting mtu 1500 for CNI 
	I1218 23:37:41.308435       1 main.go:146] kindnetd IP family: "ipv4"
	I1218 23:37:41.308521       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1218 23:37:41.708624       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:41.708661       1 main.go:227] handling current node
	I1218 23:37:51.812818       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:51.812853       1 main.go:227] handling current node
	I1218 23:38:01.824075       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:38:01.824106       1 main.go:227] handling current node
	I1218 23:38:11.836394       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:38:11.836422       1 main.go:227] handling current node
	I1218 23:38:21.840037       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:38:21.840078       1 main.go:227] handling current node
	I1218 23:38:31.852382       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:38:31.852415       1 main.go:227] handling current node
	I1218 23:38:41.856476       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:38:41.856504       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c029df61d5a93c067a65b7200bcb4744cbe8e3502bd9d887a057b2c713d177ae] <==
	* I1218 23:37:20.884114       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1218 23:37:20.884152       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1218 23:37:20.884220       1 cache.go:39] Caches are synced for autoregister controller
	I1218 23:37:20.904002       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1218 23:37:20.905999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1218 23:37:21.686492       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1218 23:37:21.686532       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1218 23:37:21.694344       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1218 23:37:21.697822       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1218 23:37:21.697993       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1218 23:37:22.127712       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 23:37:22.181292       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1218 23:37:22.314777       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1218 23:37:22.315704       1 controller.go:609] quota admission added evaluator for: endpoints
	I1218 23:37:22.319762       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1218 23:37:22.611317       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 23:37:23.156976       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1218 23:37:23.892204       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1218 23:37:23.961571       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1218 23:37:38.874587       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1218 23:37:39.066375       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1218 23:37:48.549117       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1218 23:38:06.632317       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1218 23:38:38.288602       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1218 23:38:40.094183       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [69d0b39930e658e7806f38c8ac3b05edcb5294bff99b995ba2e6b251cdfb7c72] <==
	* I1218 23:37:39.012356       1 shared_informer.go:230] Caches are synced for GC 
	I1218 23:37:39.012647       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1218 23:37:39.013101       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-909642", UID:"e279b3a9-f7cb-4269-a3b0-42295cc76336", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-909642 event: Registered Node ingress-addon-legacy-909642 in Controller
	I1218 23:37:39.031880       1 range_allocator.go:373] Set node ingress-addon-legacy-909642 PodCIDR to [10.244.0.0/24]
	I1218 23:37:39.061476       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1218 23:37:39.061732       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1218 23:37:39.067071       1 shared_informer.go:230] Caches are synced for attach detach 
	I1218 23:37:39.085659       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"82f04c68-b6fa-410a-b841-f4154ec34a8b", APIVersion:"apps/v1", ResourceVersion:"220", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-kvtb5
	I1218 23:37:39.102418       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"6f3b4832-cee0-46a3-b6b6-c38a9675432c", APIVersion:"apps/v1", ResourceVersion:"229", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-9zfxw
	I1218 23:37:39.130734       1 shared_informer.go:230] Caches are synced for resource quota 
	E1218 23:37:39.141022       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"82f04c68-b6fa-410a-b841-f4154ec34a8b", ResourceVersion:"220", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63838539443, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400192de20), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x400192de40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400192de60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001968800), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x400192de80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400192dea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400192dee0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001960be0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001952ff8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000235730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40016aa250)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001953048)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1218 23:37:39.153270       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"6f3b4832-cee0-46a3-b6b6-c38a9675432c", ResourceVersion:"229", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63838539444, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400192df40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400192df60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400192df80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400192dfa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400192dfc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400192dfe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001994000)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001994040)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001960d70), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001953248), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002357a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40016aa258)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001953290)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1218 23:37:39.160533       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1218 23:37:39.160558       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1218 23:37:39.169197       1 shared_informer.go:230] Caches are synced for resource quota 
	I1218 23:37:39.224342       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1218 23:37:48.541600       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"1dc4bac3-0745-4f84-8a44-1349f872eed9", APIVersion:"apps/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1218 23:37:48.570923       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"b2849ef2-854a-4699-be9f-136b0a6f78ed", APIVersion:"apps/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-f24p2
	I1218 23:37:48.571753       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7f39a9a2-351c-4dd8-a4bd-c2540afca581", APIVersion:"batch/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-zkjdl
	I1218 23:37:48.655940       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e3d79a62-fab7-4830-b2c3-69c032f2bce2", APIVersion:"batch/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-296jw
	I1218 23:37:51.431174       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7f39a9a2-351c-4dd8-a4bd-c2540afca581", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1218 23:37:51.492820       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e3d79a62-fab7-4830-b2c3-69c032f2bce2", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1218 23:38:16.399952       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"55001c41-59d6-4130-9466-966f0c6b371d", APIVersion:"apps/v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1218 23:38:16.418573       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"868dfd1a-9fec-4699-ab43-27af1ec0bb77", APIVersion:"apps/v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-h9n29
	E1218 23:38:43.055121       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-vxkh6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [9ac713004c6a7f1950e9b999c20b2d0f33abbed62ccb481c0e3a1a59e52c5545] <==
	* W1218 23:37:41.278524       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1218 23:37:41.291947       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1218 23:37:41.291984       1 server_others.go:186] Using iptables Proxier.
	I1218 23:37:41.292405       1 server.go:583] Version: v1.18.20
	I1218 23:37:41.293499       1 config.go:315] Starting service config controller
	I1218 23:37:41.293704       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1218 23:37:41.293936       1 config.go:133] Starting endpoints config controller
	I1218 23:37:41.294062       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1218 23:37:41.394057       1 shared_informer.go:230] Caches are synced for service config 
	I1218 23:37:41.394283       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d8b92086b7afe4f13cba0531c2f8e2366fcef59b134a64de30e166b8be691a97] <==
	* W1218 23:37:20.834484       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1218 23:37:20.834490       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1218 23:37:20.889899       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1218 23:37:20.889935       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1218 23:37:20.897446       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1218 23:37:20.902798       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 23:37:20.903028       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 23:37:20.903174       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1218 23:37:20.909153       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 23:37:20.927948       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 23:37:20.928268       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 23:37:20.928483       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 23:37:20.928689       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 23:37:20.928904       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 23:37:20.929136       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 23:37:20.929325       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 23:37:20.929583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 23:37:20.929773       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 23:37:20.929974       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 23:37:20.930233       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 23:37:21.814631       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 23:37:21.827507       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 23:37:21.912128       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 23:37:21.933009       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1218 23:37:22.603303       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:32.317273    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d0a20fec29b013f4693be99422f4f1029502659a7183806c8b2e8906c2dab55a
	Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:32.334158    1660 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-v57kp" (UniqueName: "kubernetes.io/secret/1e7b3b70-eff7-4a87-baf6-ad256616f929-minikube-ingress-dns-token-v57kp") pod "1e7b3b70-eff7-4a87-baf6-ad256616f929" (UID: "1e7b3b70-eff7-4a87-baf6-ad256616f929")
	Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:32.340832    1660 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7b3b70-eff7-4a87-baf6-ad256616f929-minikube-ingress-dns-token-v57kp" (OuterVolumeSpecName: "minikube-ingress-dns-token-v57kp") pod "1e7b3b70-eff7-4a87-baf6-ad256616f929" (UID: "1e7b3b70-eff7-4a87-baf6-ad256616f929"). InnerVolumeSpecName "minikube-ingress-dns-token-v57kp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:32.434561    1660 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-v57kp" (UniqueName: "kubernetes.io/secret/1e7b3b70-eff7-4a87-baf6-ad256616f929-minikube-ingress-dns-token-v57kp") on node "ingress-addon-legacy-909642" DevicePath ""
	Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:32.603663    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d0a20fec29b013f4693be99422f4f1029502659a7183806c8b2e8906c2dab55a
	Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:32.604028    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2092d174e8f9db30ef8e0b1367f781bf02be2048fa8a9169f47e3e7fe546ea1e
	Dec 18 23:38:32 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:32.604274    1660 pod_workers.go:191] Error syncing pod f2001ade-f37f-4379-a1f9-58d035404853 ("hello-world-app-5f5d8b66bb-h9n29_default(f2001ade-f37f-4379-a1f9-58d035404853)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-h9n29_default(f2001ade-f37f-4379-a1f9-58d035404853)"
	Dec 18 23:38:33 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:33.609147    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 75c6649f7f5007aa2eb486b057eb0be052a9abbdc68249fa07a386ba6f01058e
	Dec 18 23:38:38 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:38.277240    1660 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-f24p2.17a211613373ca88", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-f24p2", UID:"11c12108-2159-4822-a4c5-3a336f91a87b", APIVersion:"v1", ResourceVersion:"455", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-909642"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15853df90395e88, ext:74484500509, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15853df90395e88, ext:74484500509, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-f24p2.17a211613373ca88" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 23:38:38 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:38.289490    1660 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-f24p2.17a211613373ca88", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-f24p2", UID:"11c12108-2159-4822-a4c5-3a336f91a87b", APIVersion:"v1", ResourceVersion:"455", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-909642"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15853df90395e88, ext:74484500509, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15853df90d83768, ext:74494910717, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-f24p2.17a211613373ca88" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 23:38:40 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:40.479032    1660 remote_runtime.go:128] StopPodSandbox "e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e": plugin type="portmap" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-86854554bd8c2bdefae0e --wait]: exit status 1: iptables: No chain/target/match by that name.
	Dec 18 23:38:40 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:40.479092    1660 kuberuntime_manager.go:912] Failed to stop sandbox {"containerd" "e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e"}
	Dec 18 23:38:40 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:40.479156    1660 kubelet.go:1598] error killing pod: failed to "KillPodSandbox" for "11c12108-2159-4822-a4c5-3a336f91a87b" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-86854554bd8c2bdefae0e --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Dec 18 23:38:40 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:40.479174    1660 pod_workers.go:191] Error syncing pod 11c12108-2159-4822-a4c5-3a336f91a87b ("ingress-nginx-controller-7fcf777cb7-f24p2_ingress-nginx(11c12108-2159-4822-a4c5-3a336f91a87b)"), skipping: error killing pod: failed to "KillPodSandbox" for "11c12108-2159-4822-a4c5-3a336f91a87b" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-86854554bd8c2bdefae0e --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Dec 18 23:38:40 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:40.481537    1660 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-f24p2.17a21161b6ff2549", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-f24p2", UID:"11c12108-2159-4822-a4c5-3a336f91a87b", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}, Reason:"FailedKillPod", Message:"error killing pod: failed t
o \"KillPodSandbox\" for \"11c12108-2159-4822-a4c5-3a336f91a87b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e\\\": plugin type=\\\"portmap\\\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-86854554bd8c2bdefae0e --wait]: exit status 1: iptables: No chain/target/match by that name.\\n\"", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-909642"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15853e01c8f2549, ext:76691448543, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15853e01c8f2549, ext:76691448543, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx
-controller-7fcf777cb7-f24p2.17a21161b6ff2549" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 23:38:40 ingress-addon-legacy-909642 kubelet[1660]: W1218 23:38:40.624180    1660 pod_container_deletor.go:77] Container "e349839b150a8551c0000523d14f010fba05f069749925a98b0478fcafbdaa8e" not found in pod's containers
	Dec 18 23:38:42 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:42.365379    1660 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-h29xq" (UniqueName: "kubernetes.io/secret/11c12108-2159-4822-a4c5-3a336f91a87b-ingress-nginx-token-h29xq") pod "11c12108-2159-4822-a4c5-3a336f91a87b" (UID: "11c12108-2159-4822-a4c5-3a336f91a87b")
	Dec 18 23:38:42 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:42.365444    1660 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/11c12108-2159-4822-a4c5-3a336f91a87b-webhook-cert") pod "11c12108-2159-4822-a4c5-3a336f91a87b" (UID: "11c12108-2159-4822-a4c5-3a336f91a87b")
	Dec 18 23:38:42 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:42.373263    1660 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c12108-2159-4822-a4c5-3a336f91a87b-ingress-nginx-token-h29xq" (OuterVolumeSpecName: "ingress-nginx-token-h29xq") pod "11c12108-2159-4822-a4c5-3a336f91a87b" (UID: "11c12108-2159-4822-a4c5-3a336f91a87b"). InnerVolumeSpecName "ingress-nginx-token-h29xq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:38:42 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:42.374093    1660 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c12108-2159-4822-a4c5-3a336f91a87b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "11c12108-2159-4822-a4c5-3a336f91a87b" (UID: "11c12108-2159-4822-a4c5-3a336f91a87b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:38:42 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:42.465888    1660 reconciler.go:319] Volume detached for volume "ingress-nginx-token-h29xq" (UniqueName: "kubernetes.io/secret/11c12108-2159-4822-a4c5-3a336f91a87b-ingress-nginx-token-h29xq") on node "ingress-addon-legacy-909642" DevicePath ""
	Dec 18 23:38:42 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:42.466093    1660 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/11c12108-2159-4822-a4c5-3a336f91a87b-webhook-cert") on node "ingress-addon-legacy-909642" DevicePath ""
	Dec 18 23:38:43 ingress-addon-legacy-909642 kubelet[1660]: W1218 23:38:43.322982    1660 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/11c12108-2159-4822-a4c5-3a336f91a87b/volumes" does not exist
	Dec 18 23:38:44 ingress-addon-legacy-909642 kubelet[1660]: I1218 23:38:44.317231    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2092d174e8f9db30ef8e0b1367f781bf02be2048fa8a9169f47e3e7fe546ea1e
	Dec 18 23:38:44 ingress-addon-legacy-909642 kubelet[1660]: E1218 23:38:44.317512    1660 pod_workers.go:191] Error syncing pod f2001ade-f37f-4379-a1f9-58d035404853 ("hello-world-app-5f5d8b66bb-h9n29_default(f2001ade-f37f-4379-a1f9-58d035404853)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-h9n29_default(f2001ade-f37f-4379-a1f9-58d035404853)"
	
	* 
	* ==> storage-provisioner [1abf17d71043d4f7beb0ee19cc8a8b33d4959c141e63a73071759e9b9436409f] <==
	* I1218 23:37:40.411033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1218 23:38:10.412603       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [49f471382def07c159a43594e1d3afbe6a7e77c26f3899d2537fba64f422a8d3] <==
	* I1218 23:38:11.627120       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 23:38:11.638833       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 23:38:11.638926       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 23:38:11.646923       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 23:38:11.648166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-909642_7d67eca8-e681-4edd-b570-a3938413dd1d!
	I1218 23:38:11.649565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3cac619f-3c41-4171-90f0-5ee9340acc9d", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-909642_7d67eca8-e681-4edd-b570-a3938413dd1d became leader
	I1218 23:38:11.748578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-909642_7d67eca8-e681-4edd-b570-a3938413dd1d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-909642 -n ingress-addon-legacy-909642
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-909642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.09s)


Test pass (276/315)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.45
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 14.38
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.1
17 TestDownloadOnly/v1.29.0-rc.2/json-events 7.27
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.1
23 TestDownloadOnly/DeleteAll 0.25
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
26 TestBinaryMirror 1.17
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.18
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
32 TestAddons/Setup 138.59
34 TestAddons/parallel/Registry 15.89
36 TestAddons/parallel/InspektorGadget 10.88
37 TestAddons/parallel/MetricsServer 6.98
41 TestAddons/parallel/Headlamp 10.89
42 TestAddons/parallel/CloudSpanner 5.69
43 TestAddons/parallel/LocalPath 51.83
44 TestAddons/parallel/NvidiaDevicePlugin 5.86
47 TestAddons/serial/GCPAuth/Namespaces 0.19
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 36.11
50 TestCertExpiration 226.58
52 TestForceSystemdFlag 36.9
53 TestForceSystemdEnv 36.26
54 TestDockerEnvContainerd 47.1
59 TestErrorSpam/setup 30.27
60 TestErrorSpam/start 0.94
61 TestErrorSpam/status 1.17
62 TestErrorSpam/pause 1.93
63 TestErrorSpam/unpause 2.03
64 TestErrorSpam/stop 1.5
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 60.7
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.1
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.24
76 TestFunctional/serial/CacheCmd/cache/add_local 1.71
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
81 TestFunctional/serial/CacheCmd/cache/delete 0.15
82 TestFunctional/serial/MinikubeKubectlCmd 0.16
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 46.52
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.87
87 TestFunctional/serial/LogsFileCmd 1.87
88 TestFunctional/serial/InvalidService 4.93
90 TestFunctional/parallel/ConfigCmd 0.6
91 TestFunctional/parallel/DashboardCmd 11.17
92 TestFunctional/parallel/DryRun 0.54
93 TestFunctional/parallel/InternationalLanguage 0.28
94 TestFunctional/parallel/StatusCmd 1.54
98 TestFunctional/parallel/ServiceCmdConnect 10.9
99 TestFunctional/parallel/AddonsCmd 0.22
100 TestFunctional/parallel/PersistentVolumeClaim 26.67
102 TestFunctional/parallel/SSHCmd 0.83
103 TestFunctional/parallel/CpCmd 2.79
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.41
110 TestFunctional/parallel/NodeLabels 0.15
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
114 TestFunctional/parallel/License 0.34
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.59
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
127 TestFunctional/parallel/ServiceCmd/List 0.67
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
130 TestFunctional/parallel/ProfileCmd/profile_list 0.57
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.65
133 TestFunctional/parallel/MountCmd/any-port 8.16
134 TestFunctional/parallel/ServiceCmd/Format 0.74
135 TestFunctional/parallel/ServiceCmd/URL 0.59
136 TestFunctional/parallel/MountCmd/specific-port 2.67
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.2
138 TestFunctional/parallel/Version/short 0.09
139 TestFunctional/parallel/Version/components 1.35
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
144 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
145 TestFunctional/parallel/ImageCommands/Setup 2.52
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.66
156 TestFunctional/delete_addon-resizer_images 0.1
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 79.58
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.04
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
169 TestJSONOutput/start/Command 89.77
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.85
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.75
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.87
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.27
194 TestKicCustomNetwork/create_custom_network 42.38
195 TestKicCustomNetwork/use_default_bridge_network 36.14
196 TestKicExistingNetwork 36.87
197 TestKicCustomSubnet 34.39
198 TestKicStaticIP 39.57
199 TestMainNoArgs 0.07
200 TestMinikubeProfile 70.67
203 TestMountStart/serial/StartWithMountFirst 7.49
204 TestMountStart/serial/VerifyMountFirst 0.32
205 TestMountStart/serial/StartWithMountSecond 9.05
206 TestMountStart/serial/VerifyMountSecond 0.31
207 TestMountStart/serial/DeleteFirst 1.69
208 TestMountStart/serial/VerifyMountPostDelete 0.31
209 TestMountStart/serial/Stop 1.24
210 TestMountStart/serial/RestartStopped 7.78
211 TestMountStart/serial/VerifyMountPostStop 0.31
214 TestMultiNode/serial/FreshStart2Nodes 77.13
215 TestMultiNode/serial/DeployApp2Nodes 5.25
216 TestMultiNode/serial/PingHostFrom2Pods 1.13
217 TestMultiNode/serial/AddNode 16.78
218 TestMultiNode/serial/MultiNodeLabels 0.1
219 TestMultiNode/serial/ProfileList 0.4
220 TestMultiNode/serial/CopyFile 11.93
221 TestMultiNode/serial/StopNode 2.44
222 TestMultiNode/serial/StartAfterStop 12.27
223 TestMultiNode/serial/RestartKeepsNodes 119.56
224 TestMultiNode/serial/DeleteNode 5.25
225 TestMultiNode/serial/StopMultiNode 24.48
226 TestMultiNode/serial/RestartMultiNode 79.82
227 TestMultiNode/serial/ValidateNameConflict 34.43
232 TestPreload 146.34
234 TestScheduledStopUnix 108.16
237 TestInsufficientStorage 10.19
238 TestRunningBinaryUpgrade 90.59
240 TestKubernetesUpgrade 402.42
241 TestMissingContainerUpgrade 169.28
243 TestPause/serial/Start 91.37
244 TestPause/serial/SecondStartNoReconfiguration 6.35
245 TestPause/serial/Pause 0.78
246 TestPause/serial/VerifyStatus 0.4
247 TestPause/serial/Unpause 0.77
248 TestPause/serial/PauseAgain 0.97
249 TestPause/serial/DeletePaused 2.62
250 TestPause/serial/VerifyDeletedResources 0.16
251 TestStoppedBinaryUpgrade/Setup 1.22
252 TestStoppedBinaryUpgrade/Upgrade 101.97
253 TestStoppedBinaryUpgrade/MinikubeLogs 1.44
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
263 TestNoKubernetes/serial/StartWithK8s 32.38
264 TestNoKubernetes/serial/StartWithStopK8s 8.33
265 TestNoKubernetes/serial/Start 6.36
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
267 TestNoKubernetes/serial/ProfileList 1.17
268 TestNoKubernetes/serial/Stop 1.34
269 TestNoKubernetes/serial/StartNoArgs 8
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
278 TestNetworkPlugins/group/false 5.81
283 TestStartStop/group/old-k8s-version/serial/FirstStart 130.73
285 TestStartStop/group/no-preload/serial/FirstStart 74.91
286 TestStartStop/group/old-k8s-version/serial/DeployApp 9.75
287 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.75
288 TestStartStop/group/old-k8s-version/serial/Stop 13.14
289 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
290 TestStartStop/group/old-k8s-version/serial/SecondStart 666.03
291 TestStartStop/group/no-preload/serial/DeployApp 9.39
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.38
293 TestStartStop/group/no-preload/serial/Stop 12.18
294 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
295 TestStartStop/group/no-preload/serial/SecondStart 345.62
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
299 TestStartStop/group/no-preload/serial/Pause 3.56
301 TestStartStop/group/embed-certs/serial/FirstStart 61.77
302 TestStartStop/group/embed-certs/serial/DeployApp 9.38
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
304 TestStartStop/group/embed-certs/serial/Stop 12.1
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
306 TestStartStop/group/embed-certs/serial/SecondStart 341.97
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
310 TestStartStop/group/old-k8s-version/serial/Pause 3.57
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.46
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 345.63
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
321 TestStartStop/group/embed-certs/serial/Pause 3.52
323 TestStartStop/group/newest-cni/serial/FirstStart 52.86
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.52
326 TestStartStop/group/newest-cni/serial/Stop 1.33
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
328 TestStartStop/group/newest-cni/serial/SecondStart 32.29
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
332 TestStartStop/group/newest-cni/serial/Pause 3.73
333 TestNetworkPlugins/group/auto/Start 89.65
334 TestNetworkPlugins/group/auto/KubeletFlags 0.43
335 TestNetworkPlugins/group/auto/NetCatPod 10.38
336 TestNetworkPlugins/group/auto/DNS 0.26
337 TestNetworkPlugins/group/auto/Localhost 0.21
338 TestNetworkPlugins/group/auto/HairPin 0.21
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
340 TestNetworkPlugins/group/kindnet/Start 90.57
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.34
344 TestNetworkPlugins/group/calico/Start 79.22
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
347 TestNetworkPlugins/group/kindnet/NetCatPod 9.3
348 TestNetworkPlugins/group/calico/ControllerPod 6.01
349 TestNetworkPlugins/group/calico/KubeletFlags 0.38
350 TestNetworkPlugins/group/calico/NetCatPod 9.28
351 TestNetworkPlugins/group/kindnet/DNS 0.29
352 TestNetworkPlugins/group/kindnet/Localhost 0.24
353 TestNetworkPlugins/group/kindnet/HairPin 0.17
354 TestNetworkPlugins/group/calico/DNS 0.31
355 TestNetworkPlugins/group/calico/Localhost 0.44
356 TestNetworkPlugins/group/calico/HairPin 0.21
357 TestNetworkPlugins/group/custom-flannel/Start 70.67
358 TestNetworkPlugins/group/enable-default-cni/Start 53.29
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.3
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
366 TestNetworkPlugins/group/custom-flannel/DNS 0.19
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
369 TestNetworkPlugins/group/flannel/Start 61.55
370 TestNetworkPlugins/group/bridge/Start 88.98
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
373 TestNetworkPlugins/group/flannel/NetCatPod 9.26
374 TestNetworkPlugins/group/flannel/DNS 0.2
375 TestNetworkPlugins/group/flannel/Localhost 0.17
376 TestNetworkPlugins/group/flannel/HairPin 0.17
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
378 TestNetworkPlugins/group/bridge/NetCatPod 9.4
379 TestNetworkPlugins/group/bridge/DNS 0.17
380 TestNetworkPlugins/group/bridge/Localhost 0.16
381 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.16.0/json-events (17.45s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-037071 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-037071 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.454474413s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.45s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-037071
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-037071: exit status 85 (92.61276ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-037071 | jenkins | v1.32.0 | 18 Dec 23 23:25 UTC |          |
	|         | -p download-only-037071        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:25:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:25:54.990065 4009784 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:25:54.990258 4009784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:25:54.990268 4009784 out.go:309] Setting ErrFile to fd 2...
	I1218 23:25:54.990274 4009784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:25:54.990539 4009784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	W1218 23:25:54.990698 4009784 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-4004447/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-4004447/.minikube/config/config.json: no such file or directory
	I1218 23:25:54.991172 4009784 out.go:303] Setting JSON to true
	I1218 23:25:54.992085 4009784 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":198498,"bootTime":1702743457,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:25:54.992155 4009784 start.go:138] virtualization:  
	I1218 23:25:54.994854 4009784 out.go:97] [download-only-037071] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:25:54.997150 4009784 out.go:169] MINIKUBE_LOCATION=17822
	W1218 23:25:54.995121 4009784 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball: no such file or directory
	I1218 23:25:54.995189 4009784 notify.go:220] Checking for updates...
	I1218 23:25:55.006735 4009784 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:25:55.012487 4009784 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:25:55.014976 4009784 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:25:55.017307 4009784 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 23:25:55.022191 4009784 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 23:25:55.022526 4009784 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:25:55.048888 4009784 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:25:55.049011 4009784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:25:55.129759 4009784 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-18 23:25:55.119049663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:25:55.129926 4009784 docker.go:295] overlay module found
	I1218 23:25:55.132230 4009784 out.go:97] Using the docker driver based on user configuration
	I1218 23:25:55.132266 4009784 start.go:298] selected driver: docker
	I1218 23:25:55.132280 4009784 start.go:902] validating driver "docker" against <nil>
	I1218 23:25:55.132391 4009784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:25:55.205151 4009784 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-18 23:25:55.195387445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:25:55.205314 4009784 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:25:55.205604 4009784 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1218 23:25:55.205760 4009784 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 23:25:55.215712 4009784 out.go:169] Using Docker driver with root privileges
	I1218 23:25:55.224258 4009784 cni.go:84] Creating CNI manager for ""
	I1218 23:25:55.224287 4009784 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:25:55.224301 4009784 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:25:55.224314 4009784 start_flags.go:323] config:
	{Name:download-only-037071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-037071 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:25:55.227536 4009784 out.go:97] Starting control plane node download-only-037071 in cluster download-only-037071
	I1218 23:25:55.227589 4009784 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1218 23:25:55.229522 4009784 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:25:55.229558 4009784 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1218 23:25:55.229615 4009784 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:25:55.248117 4009784 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:25:55.248910 4009784 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:25:55.249045 4009784 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:25:55.305194 4009784 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1218 23:25:55.305220 4009784 cache.go:56] Caching tarball of preloaded images
	I1218 23:25:55.305393 4009784 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1218 23:25:55.307915 4009784 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1218 23:25:55.307939 4009784 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:25:55.427548 4009784 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1218 23:26:01.830808 4009784 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 23:26:07.940449 4009784 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:26:07.940556 4009784 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:26:09.055994 4009784 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1218 23:26:09.056407 4009784 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/download-only-037071/config.json ...
	I1218 23:26:09.056445 4009784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/download-only-037071/config.json: {Name:mk2da4a2ab61374f7812c1baefeb0e578f608f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:26:09.057123 4009784 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1218 23:26:09.057350 4009784 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-037071"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.4/json-events (14.38s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-037071 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-037071 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.37643421s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.38s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-037071
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-037071: exit status 85 (96.531149ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-037071 | jenkins | v1.32.0 | 18 Dec 23 23:25 UTC |          |
	|         | -p download-only-037071        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-037071 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |          |
	|         | -p download-only-037071        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:26:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:26:12.552493 4009857 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:26:12.552708 4009857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:12.552735 4009857 out.go:309] Setting ErrFile to fd 2...
	I1218 23:26:12.552755 4009857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:12.553072 4009857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	W1218 23:26:12.553228 4009857 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-4004447/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-4004447/.minikube/config/config.json: no such file or directory
	I1218 23:26:12.553520 4009857 out.go:303] Setting JSON to true
	I1218 23:26:12.554467 4009857 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":198516,"bootTime":1702743457,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:26:12.554563 4009857 start.go:138] virtualization:  
	I1218 23:26:12.556960 4009857 out.go:97] [download-only-037071] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:26:12.558992 4009857 out.go:169] MINIKUBE_LOCATION=17822
	I1218 23:26:12.557295 4009857 notify.go:220] Checking for updates...
	I1218 23:26:12.563034 4009857 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:26:12.565131 4009857 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:26:12.567269 4009857 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:26:12.569438 4009857 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 23:26:12.573733 4009857 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 23:26:12.574261 4009857 config.go:182] Loaded profile config "download-only-037071": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1218 23:26:12.574347 4009857 start.go:810] api.Load failed for download-only-037071: filestore "download-only-037071": Docker machine "download-only-037071" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 23:26:12.574453 4009857 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 23:26:12.574480 4009857 start.go:810] api.Load failed for download-only-037071: filestore "download-only-037071": Docker machine "download-only-037071" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 23:26:12.598977 4009857 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:26:12.599095 4009857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:12.683251 4009857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:12.673136737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:12.683356 4009857 docker.go:295] overlay module found
	I1218 23:26:12.685035 4009857 out.go:97] Using the docker driver based on existing profile
	I1218 23:26:12.685062 4009857 start.go:298] selected driver: docker
	I1218 23:26:12.685069 4009857 start.go:902] validating driver "docker" against &{Name:download-only-037071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-037071 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:26:12.685252 4009857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:12.750785 4009857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:12.741075776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:12.751268 4009857 cni.go:84] Creating CNI manager for ""
	I1218 23:26:12.751287 4009857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:26:12.751301 4009857 start_flags.go:323] config:
	{Name:download-only-037071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-037071 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:26:12.753294 4009857 out.go:97] Starting control plane node download-only-037071 in cluster download-only-037071
	I1218 23:26:12.753322 4009857 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1218 23:26:12.755003 4009857 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:26:12.755028 4009857 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:26:12.755131 4009857 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:26:12.772379 4009857 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:26:12.772523 4009857 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:26:12.772560 4009857 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 23:26:12.772565 4009857 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 23:26:12.772573 4009857 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 23:26:12.827947 4009857 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I1218 23:26:12.827972 4009857 cache.go:56] Caching tarball of preloaded images
	I1218 23:26:12.828821 4009857 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 23:26:12.830997 4009857 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1218 23:26:12.831019 4009857 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I1218 23:26:12.949059 4009857 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-037071"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (7.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-037071 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-037071 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.267018512s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (7.27s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
--- PASS: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-037071
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-037071: exit status 85 (98.798045ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-037071 | jenkins | v1.32.0 | 18 Dec 23 23:25 UTC |          |
	|         | -p download-only-037071           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-037071 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |          |
	|         | -p download-only-037071           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-037071 | jenkins | v1.32.0 | 18 Dec 23 23:26 UTC |          |
	|         | -p download-only-037071           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:26:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:26:27.022659 4009931 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:26:27.022912 4009931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:27.022923 4009931 out.go:309] Setting ErrFile to fd 2...
	I1218 23:26:27.022930 4009931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:26:27.023200 4009931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	W1218 23:26:27.023394 4009931 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-4004447/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-4004447/.minikube/config/config.json: no such file or directory
	I1218 23:26:27.023691 4009931 out.go:303] Setting JSON to true
	I1218 23:26:27.024615 4009931 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":198530,"bootTime":1702743457,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:26:27.024691 4009931 start.go:138] virtualization:  
	I1218 23:26:27.027210 4009931 out.go:97] [download-only-037071] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:26:27.029600 4009931 out.go:169] MINIKUBE_LOCATION=17822
	I1218 23:26:27.027488 4009931 notify.go:220] Checking for updates...
	I1218 23:26:27.033543 4009931 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:26:27.035548 4009931 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:26:27.037540 4009931 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:26:27.039481 4009931 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 23:26:27.043652 4009931 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 23:26:27.044199 4009931 config.go:182] Loaded profile config "download-only-037071": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	W1218 23:26:27.044288 4009931 start.go:810] api.Load failed for download-only-037071: filestore "download-only-037071": Docker machine "download-only-037071" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 23:26:27.044396 4009931 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 23:26:27.044426 4009931 start.go:810] api.Load failed for download-only-037071: filestore "download-only-037071": Docker machine "download-only-037071" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 23:26:27.071206 4009931 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:26:27.071303 4009931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:27.157740 4009931 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:27.146962168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:27.157843 4009931 docker.go:295] overlay module found
	I1218 23:26:27.160111 4009931 out.go:97] Using the docker driver based on existing profile
	I1218 23:26:27.160156 4009931 start.go:298] selected driver: docker
	I1218 23:26:27.160163 4009931 start.go:902] validating driver "docker" against &{Name:download-only-037071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-037071 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:26:27.160338 4009931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:26:27.227587 4009931 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-18 23:26:27.216575804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:26:27.228057 4009931 cni.go:84] Creating CNI manager for ""
	I1218 23:26:27.228078 4009931 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 23:26:27.228091 4009931 start_flags.go:323] config:
	{Name:download-only-037071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-037071 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:26:27.230295 4009931 out.go:97] Starting control plane node download-only-037071 in cluster download-only-037071
	I1218 23:26:27.230328 4009931 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1218 23:26:27.232314 4009931 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:26:27.232376 4009931 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1218 23:26:27.232579 4009931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:26:27.250363 4009931 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:26:27.250529 4009931 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:26:27.250554 4009931 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 23:26:27.250563 4009931 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 23:26:27.250572 4009931 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	W1218 23:26:27.311917 4009931 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1218 23:26:27.312086 4009931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/download-only-037071/config.json ...
	I1218 23:26:27.312196 4009931 cache.go:107] acquiring lock: {Name:mkc1a84b139bdf44284b3654febf63f72f61bfb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.312347 4009931 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1218 23:26:27.312437 4009931 cache.go:107] acquiring lock: {Name:mk2025e8f4f8bdec071ed12367a2ab5f995263d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.312666 4009931 cache.go:107] acquiring lock: {Name:mk42d38718803182cc90c3d7a1588a9718fdb056 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.313021 4009931 cache.go:107] acquiring lock: {Name:mk48da9fbc9a0d7811ab61d57a722dbb4f15eeaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.313275 4009931 cache.go:107] acquiring lock: {Name:mkb9fbc7066b4d446383d7a58bf06c041ca01423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.313291 4009931 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:26:27.313735 4009931 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	I1218 23:26:27.313880 4009931 cache.go:107] acquiring lock: {Name:mk9a04483cda4e95ce9d0165614724f9cbc388da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.314099 4009931 cache.go:107] acquiring lock: {Name:mk5b4bc2e23493d43a5428fe1a4e93f6d8b67779 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.314327 4009931 cache.go:107] acquiring lock: {Name:mkffbbddf2f32ab6f1b18efd101a844ec4428432 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:26:27.314449 4009931 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1218 23:26:27.314658 4009931 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1218 23:26:27.315492 4009931 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1218 23:26:27.315949 4009931 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1218 23:26:27.316128 4009931 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:26:27.316563 4009931 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubelet
	I1218 23:26:27.316703 4009931 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1218 23:26:27.316887 4009931 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1218 23:26:27.316979 4009931 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1218 23:26:27.317669 4009931 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubeadm
	I1218 23:26:27.318010 4009931 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1218 23:26:27.318801 4009931 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1218 23:26:27.319662 4009931 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1218 23:26:27.320138 4009931 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1218 23:26:27.320165 4009931 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1218 23:26:27.320278 4009931 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1218 23:26:27.320705 4009931 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1218 23:26:27.682325 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1218 23:26:27.702799 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I1218 23:26:27.714448 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1218 23:26:27.720211 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1218 23:26:27.720302 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1218 23:26:27.735974 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I1218 23:26:27.741195 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I1218 23:26:27.797454 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1218 23:26:27.797480 4009931 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 483.604348ms
	I1218 23:26:27.797529 4009931 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	W1218 23:26:27.862406 4009931 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1218 23:26:27.862516 4009931 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1218 23:26:28.197961 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1218 23:26:28.197990 4009931 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 885.800444ms
	I1218 23:26:28.198005 4009931 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1218 23:26:28.979781 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I1218 23:26:28.979808 4009931 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 1.666535326s
	I1218 23:26:28.979850 4009931 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I1218 23:26:29.348104 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I1218 23:26:29.348139 4009931 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 2.035120254s
	I1218 23:26:29.348153 4009931 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I1218 23:26:29.407882 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1218 23:26:29.407911 4009931 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.093603163s
	I1218 23:26:29.407925 4009931 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1218 23:26:29.933937 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I1218 23:26:29.933965 4009931 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 2.621534525s
	I1218 23:26:29.933981 4009931 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I1218 23:26:29.974637 4009931 cache.go:157] /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I1218 23:26:29.974670 4009931 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 2.662006821s
	I1218 23:26:29.974685 4009931 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/17822-4004447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-037071"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.10s)

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-037071
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (1.17s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-180531 --alsologtostderr --binary-mirror http://127.0.0.1:34359 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-180531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-180531
--- PASS: TestBinaryMirror (1.17s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-505406
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-505406: exit status 85 (177.590202ms)

-- stdout --
	* Profile "addons-505406" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-505406"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-505406
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-505406: exit status 85 (209.795176ms)

-- stdout --
	* Profile "addons-505406" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-505406"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (138.59s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-505406 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-505406 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m18.587495264s)
--- PASS: TestAddons/Setup (138.59s)

TestAddons/parallel/Registry (15.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 40.65321ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wzr22" [24ef522d-90a4-4844-810a-182a22d8094c] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005233409s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x7nmz" [953b9840-520c-42b3-8b05-574b76391cd3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006375453s
addons_test.go:339: (dbg) Run:  kubectl --context addons-505406 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-505406 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-505406 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.567439141s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 ip
2023/12/18 23:29:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.89s)

TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dqs7h" [315ece5b-b81c-4323-a9cd-ba4608ad1330] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004337276s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-505406
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-505406: (5.873824069s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (6.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 7.717977ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zjzgk" [d76dd58c-71b2-415f-b318-39b1117343c1] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00447724s
addons_test.go:414: (dbg) Run:  kubectl --context addons-505406 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.98s)

TestAddons/parallel/Headlamp (10.89s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-505406 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-505406 --alsologtostderr -v=1: (1.884081363s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-q67zw" [d907d5cf-cade-4543-bb22-171c71c61f9e] Pending
helpers_test.go:344: "headlamp-777fd4b855-q67zw" [d907d5cf-cade-4543-bb22-171c71c61f9e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-q67zw" [d907d5cf-cade-4543-bb22-171c71c61f9e] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004432822s
--- PASS: TestAddons/parallel/Headlamp (10.89s)

TestAddons/parallel/CloudSpanner (5.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-j6mj7" [22ab4371-2bc2-486a-9ff5-6fd54c8888c0] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004601905s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-505406
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

TestAddons/parallel/LocalPath (51.83s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-505406 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-505406 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505406 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1f15c335-a06d-4607-89b4-de601b704485] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1f15c335-a06d-4607-89b4-de601b704485] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1f15c335-a06d-4607-89b4-de601b704485] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005382115s
addons_test.go:890: (dbg) Run:  kubectl --context addons-505406 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 ssh "cat /opt/local-path-provisioner/pvc-f97d0987-3d5a-4fdc-9a42-f57119c663d4_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-505406 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-505406 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-505406 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-505406 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.369867051s)
--- PASS: TestAddons/parallel/LocalPath (51.83s)

TestAddons/parallel/NvidiaDevicePlugin (5.86s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sr2zs" [5b296706-5778-44a8-a5fe-4eeaec480f20] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.058462586s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-505406
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.86s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-505406 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-505406 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.46s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-505406
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-505406: (12.136872088s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-505406
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-505406
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-505406
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

TestCertOptions (36.11s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-618814 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-618814 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.994600246s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-618814 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-618814 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-618814 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-618814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-618814
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-618814: (2.082358022s)
--- PASS: TestCertOptions (36.11s)

TestCertExpiration (226.58s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-198816 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-198816 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.231440931s)
E1219 00:05:19.854277 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-198816 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-198816 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.995216849s)
helpers_test.go:175: Cleaning up "cert-expiration-198816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-198816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-198816: (2.348061719s)
--- PASS: TestCertExpiration (226.58s)

TestForceSystemdFlag (36.9s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-373150 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-373150 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.522196101s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-373150 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-373150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-373150
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-373150: (2.027800745s)
--- PASS: TestForceSystemdFlag (36.90s)

TestForceSystemdEnv (36.26s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-840653 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1219 00:02:58.409702 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-840653 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.750162555s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-840653 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-840653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-840653
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-840653: (2.032993801s)
--- PASS: TestForceSystemdEnv (36.26s)

TestDockerEnvContainerd (47.1s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-310194 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-310194 --driver=docker  --container-runtime=containerd: (30.800577311s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-310194"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-310194": (1.319432608s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-GSqFxjEakJAG/agent.4026773" SSH_AGENT_PID="4026774" DOCKER_HOST=ssh://docker@127.0.0.1:42676 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-GSqFxjEakJAG/agent.4026773" SSH_AGENT_PID="4026774" DOCKER_HOST=ssh://docker@127.0.0.1:42676 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-GSqFxjEakJAG/agent.4026773" SSH_AGENT_PID="4026774" DOCKER_HOST=ssh://docker@127.0.0.1:42676 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.466621213s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-GSqFxjEakJAG/agent.4026773" SSH_AGENT_PID="4026774" DOCKER_HOST=ssh://docker@127.0.0.1:42676 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-310194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-310194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-310194: (2.014101629s)
--- PASS: TestDockerEnvContainerd (47.10s)

TestErrorSpam/setup (30.27s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-124473 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-124473 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-124473 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-124473 --driver=docker  --container-runtime=containerd: (30.265672553s)
--- PASS: TestErrorSpam/setup (30.27s)

TestErrorSpam/start (0.94s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.17s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.93s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 pause
--- PASS: TestErrorSpam/pause (1.93s)

TestErrorSpam/unpause (2.03s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 unpause
--- PASS: TestErrorSpam/unpause (2.03s)

TestErrorSpam/stop (1.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 stop: (1.263143258s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124473 --log_dir /tmp/nospam-124473 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17822-4004447/.minikube/files/etc/test/nested/copy/4009779/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.7s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-773431 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1218 23:33:55.850710 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:55.856407 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:55.866649 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:55.887009 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:55.927368 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:56.007733 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:56.168109 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:56.488637 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:57.129320 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:33:58.409540 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:34:00.970343 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:34:06.091163 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-773431 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m0.700899725s)
--- PASS: TestFunctional/serial/StartWithProxy (60.70s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.1s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-773431 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-773431 --alsologtostderr -v=8: (6.083972437s)
functional_test.go:659: soft start took 6.094617063s for "functional-773431" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.10s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-773431 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 cache add registry.k8s.io/pause:3.1: (1.523357997s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 cache add registry.k8s.io/pause:3.3: (1.414242542s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cache add registry.k8s.io/pause:latest
E1218 23:34:16.331990 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 cache add registry.k8s.io/pause:latest: (1.299889003s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

TestFunctional/serial/CacheCmd/cache/add_local (1.71s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-773431 /tmp/TestFunctionalserialCacheCmdcacheadd_local4105838817/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cache add minikube-local-cache-test:functional-773431
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 cache add minikube-local-cache-test:functional-773431: (1.216005435s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cache delete minikube-local-cache-test:functional-773431
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-773431
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.71s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (355.102928ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 cache reload: (1.135711365s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 kubectl -- --context functional-773431 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-773431 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (46.52s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-773431 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 23:34:36.813008 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-773431 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.518882416s)
functional_test.go:757: restart took 46.518974419s for "functional-773431" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.52s)

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-773431 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.87s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 logs: (1.869750351s)
--- PASS: TestFunctional/serial/LogsCmd (1.87s)

TestFunctional/serial/LogsFileCmd (1.87s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 logs --file /tmp/TestFunctionalserialLogsFileCmd2345826942/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 logs --file /tmp/TestFunctionalserialLogsFileCmd2345826942/001/logs.txt: (1.869427071s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.87s)

TestFunctional/serial/InvalidService (4.93s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-773431 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-773431
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-773431: exit status 115 (469.144487ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32147 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-773431 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-773431 delete -f testdata/invalidsvc.yaml: (1.198891515s)
--- PASS: TestFunctional/serial/InvalidService (4.93s)

TestFunctional/parallel/ConfigCmd (0.6s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 config get cpus
E1218 23:35:17.774313 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 config get cpus: exit status 14 (96.69512ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 config get cpus: exit status 14 (95.269621ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.60s)

TestFunctional/parallel/DashboardCmd (11.17s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-773431 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-773431 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4040444: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.17s)

TestFunctional/parallel/DryRun (0.54s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-773431 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-773431 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (232.168026ms)

-- stdout --
	* [functional-773431] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1218 23:35:52.244530 4040163 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:35:52.244720 4040163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:35:52.244749 4040163 out.go:309] Setting ErrFile to fd 2...
	I1218 23:35:52.244773 4040163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:35:52.245096 4040163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:35:52.245550 4040163 out.go:303] Setting JSON to false
	I1218 23:35:52.246940 4040163 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":199096,"bootTime":1702743457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:35:52.247048 4040163 start.go:138] virtualization:  
	I1218 23:35:52.249863 4040163 out.go:177] * [functional-773431] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:35:52.253019 4040163 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:35:52.254980 4040163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:35:52.253070 4040163 notify.go:220] Checking for updates...
	I1218 23:35:52.256990 4040163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:35:52.264397 4040163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:35:52.266701 4040163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:35:52.268809 4040163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:35:52.271670 4040163 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:35:52.272377 4040163 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:35:52.301060 4040163 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:35:52.301201 4040163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:35:52.389113 4040163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-18 23:35:52.378199717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:35:52.389215 4040163 docker.go:295] overlay module found
	I1218 23:35:52.391957 4040163 out.go:177] * Using the docker driver based on existing profile
	I1218 23:35:52.394109 4040163 start.go:298] selected driver: docker
	I1218 23:35:52.394127 4040163 start.go:902] validating driver "docker" against &{Name:functional-773431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-773431 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:35:52.394236 4040163 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:35:52.397023 4040163 out.go:177] 
	W1218 23:35:52.399638 4040163 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 23:35:52.401790 4040163 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-773431 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.54s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-773431 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-773431 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (276.306742ms)

-- stdout --
	* [functional-773431] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1218 23:35:52.020078 4040085 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:35:52.020344 4040085 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:35:52.020377 4040085 out.go:309] Setting ErrFile to fd 2...
	I1218 23:35:52.020400 4040085 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:35:52.020830 4040085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:35:52.021352 4040085 out.go:303] Setting JSON to false
	I1218 23:35:52.022444 4040085 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":199095,"bootTime":1702743457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1218 23:35:52.022657 4040085 start.go:138] virtualization:  
	I1218 23:35:52.026375 4040085 out.go:177] * [functional-773431] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1218 23:35:52.028941 4040085 notify.go:220] Checking for updates...
	I1218 23:35:52.029674 4040085 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:35:52.031790 4040085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:35:52.033607 4040085 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1218 23:35:52.035340 4040085 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1218 23:35:52.037378 4040085 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:35:52.038945 4040085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:35:52.041159 4040085 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:35:52.041877 4040085 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:35:52.070495 4040085 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:35:52.070633 4040085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:35:52.158747 4040085 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-18 23:35:52.148435514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:35:52.158856 4040085 docker.go:295] overlay module found
	I1218 23:35:52.161995 4040085 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1218 23:35:52.163670 4040085 start.go:298] selected driver: docker
	I1218 23:35:52.163689 4040085 start.go:902] validating driver "docker" against &{Name:functional-773431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-773431 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:35:52.163792 4040085 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:35:52.166099 4040085 out.go:177] 
	W1218 23:35:52.168087 4040085 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 23:35:52.170060 4040085 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.54s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.54s)

TestFunctional/parallel/ServiceCmdConnect (10.9s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-773431 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-773431 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-2cr42" [bdde6daa-e62f-4134-a868-601838bb4fa6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-2cr42" [bdde6daa-e62f-4134-a868-601838bb4fa6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003971526s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31077
functional_test.go:1674: http://192.168.49.2:31077: success! body:

Hostname: hello-node-connect-7799dfb7c6-2cr42

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31077
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.90s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.67s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5260b4b5-738f-4a07-a09e-969388076737] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004025382s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-773431 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-773431 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-773431 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773431 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [19b9c9b7-060b-4e3d-93eb-b7ec493e6641] Pending
helpers_test.go:344: "sp-pod" [19b9c9b7-060b-4e3d-93eb-b7ec493e6641] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [19b9c9b7-060b-4e3d-93eb-b7ec493e6641] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003952798s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-773431 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-773431 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-773431 delete -f testdata/storage-provisioner/pod.yaml: (1.581984194s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773431 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92b9d308-3aab-4d3b-9754-d17dd66eb189] Pending
helpers_test.go:344: "sp-pod" [92b9d308-3aab-4d3b-9754-d17dd66eb189] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004210774s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-773431 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.67s)

TestFunctional/parallel/SSHCmd (0.83s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

TestFunctional/parallel/CpCmd (2.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh -n functional-773431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cp functional-773431:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1137555587/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh -n functional-773431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh -n functional-773431 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.79s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/4009779/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /etc/test/nested/copy/4009779/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/4009779.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /etc/ssl/certs/4009779.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/4009779.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /usr/share/ca-certificates/4009779.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/40097792.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /etc/ssl/certs/40097792.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/40097792.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /usr/share/ca-certificates/40097792.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.41s)

TestFunctional/parallel/NodeLabels (0.15s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-773431 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh "sudo systemctl is-active docker": exit status 1 (325.80867ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh "sudo systemctl is-active crio": exit status 1 (313.965628ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-773431 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-773431 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-773431 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-773431 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4037816: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-773431 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-773431 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e4976ddd-efff-43ac-b7fa-68498c9e9412] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e4976ddd-efff-43ac-b7fa-68498c9e9412] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003983007s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.59s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-773431 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.206.142 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-773431 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-773431 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-773431 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-dgw29" [c9b84f84-02ab-47e5-9b81-e8589c82a40e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-dgw29" [c9b84f84-02ab-47e5-9b81-e8589c82a40e] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004770955s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/ServiceCmd/List (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 service list -o json
functional_test.go:1493: Took "682.81054ms" to run "out/minikube-linux-arm64 -p functional-773431 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "491.684365ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "74.01597ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "485.211491ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "88.655383ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30405
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

TestFunctional/parallel/MountCmd/any-port (8.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdany-port2850915420/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702942548852606459" to /tmp/TestFunctionalparallelMountCmdany-port2850915420/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702942548852606459" to /tmp/TestFunctionalparallelMountCmdany-port2850915420/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702942548852606459" to /tmp/TestFunctionalparallelMountCmdany-port2850915420/001/test-1702942548852606459
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (636.76284ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 23:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 23:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 23:35 test-1702942548852606459
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh cat /mount-9p/test-1702942548852606459
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-773431 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [61948fad-f91c-4118-969b-4dc1f489885a] Pending
helpers_test.go:344: "busybox-mount" [61948fad-f91c-4118-969b-4dc1f489885a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [61948fad-f91c-4118-969b-4dc1f489885a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [61948fad-f91c-4118-969b-4dc1f489885a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00357084s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-773431 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdany-port2850915420/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.16s)

TestFunctional/parallel/ServiceCmd/Format (0.74s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.74s)

TestFunctional/parallel/ServiceCmd/URL (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30405
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)

TestFunctional/parallel/MountCmd/specific-port (2.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdspecific-port855075206/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (597.141843ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdspecific-port855075206/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh "sudo umount -f /mount-9p": exit status 1 (375.748511ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-773431 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdspecific-port855075206/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.67s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1628371002/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1628371002/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1628371002/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T" /mount1: (1.331145737s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-773431 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1628371002/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1628371002/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-773431 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1628371002/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.20s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 version -o=json --components: (1.352777715s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-773431 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-773431
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-773431 image ls --format short --alsologtostderr:
I1218 23:36:21.759783 4042585 out.go:296] Setting OutFile to fd 1 ...
I1218 23:36:21.759982 4042585 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:21.759992 4042585 out.go:309] Setting ErrFile to fd 2...
I1218 23:36:21.759999 4042585 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:21.760268 4042585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
I1218 23:36:21.764660 4042585 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:21.764816 4042585 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:21.765447 4042585 cli_runner.go:164] Run: docker container inspect functional-773431 --format={{.State.Status}}
I1218 23:36:21.788224 4042585 ssh_runner.go:195] Run: systemctl --version
I1218 23:36:21.788281 4042585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773431
I1218 23:36:21.815275 4042585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42686 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/functional-773431/id_rsa Username:docker}
I1218 23:36:21.920042 4042585 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-773431 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:f09fc9 | 17.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:5628e5 | 67.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-773431  | sha256:243bf7 | 1.01kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-773431 image ls --format table --alsologtostderr:
I1218 23:36:22.100393 4042646 out.go:296] Setting OutFile to fd 1 ...
I1218 23:36:22.100544 4042646 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:22.100552 4042646 out.go:309] Setting ErrFile to fd 2...
I1218 23:36:22.100557 4042646 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:22.100852 4042646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
I1218 23:36:22.101602 4042646 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:22.101740 4042646 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:22.102297 4042646 cli_runner.go:164] Run: docker container inspect functional-773431 --format={{.State.Status}}
I1218 23:36:22.129097 4042646 ssh_runner.go:195] Run: systemctl --version
I1218 23:36:22.129205 4042646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773431
I1218 23:36:22.152075 4042646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42686 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/functional-773431/id_rsa Username:docker}
I1218 23:36:22.262932 4042646 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-773431 image ls --format json --alsologtostderr:
[{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"67241575"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":["docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17606180"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:243bf77026c3849f53573a0f8c9ce5cba439b4c323e15d8821e477aa70d1b076","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-773431"],"size":"1006"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-773431 image ls --format json --alsologtostderr:
I1218 23:36:22.076790 4042641 out.go:296] Setting OutFile to fd 1 ...
I1218 23:36:22.077087 4042641 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:22.077119 4042641 out.go:309] Setting ErrFile to fd 2...
I1218 23:36:22.077141 4042641 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:22.077453 4042641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
I1218 23:36:22.078293 4042641 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:22.078507 4042641 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:22.079105 4042641 cli_runner.go:164] Run: docker container inspect functional-773431 --format={{.State.Status}}
I1218 23:36:22.104753 4042641 ssh_runner.go:195] Run: systemctl --version
I1218 23:36:22.104806 4042641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773431
I1218 23:36:22.130221 4042641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42686 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/functional-773431/id_rsa Username:docker}
I1218 23:36:22.234665 4042641 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-773431 image ls --format yaml --alsologtostderr:
- id: sha256:5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
repoTags:
- docker.io/library/nginx:latest
size: "67241575"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests:
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
repoTags:
- docker.io/library/nginx:alpine
size: "17606180"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:243bf77026c3849f53573a0f8c9ce5cba439b4c323e15d8821e477aa70d1b076
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-773431
size: "1006"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-773431 image ls --format yaml --alsologtostderr:
I1218 23:36:21.744052 4042584 out.go:296] Setting OutFile to fd 1 ...
I1218 23:36:21.744343 4042584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:21.744374 4042584 out.go:309] Setting ErrFile to fd 2...
I1218 23:36:21.744395 4042584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:21.744710 4042584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
I1218 23:36:21.745489 4042584 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:21.745710 4042584 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:21.746425 4042584 cli_runner.go:164] Run: docker container inspect functional-773431 --format={{.State.Status}}
I1218 23:36:21.772996 4042584 ssh_runner.go:195] Run: systemctl --version
I1218 23:36:21.773051 4042584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773431
I1218 23:36:21.808986 4042584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42686 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/functional-773431/id_rsa Username:docker}
I1218 23:36:21.916057 4042584 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-773431 ssh pgrep buildkitd: exit status 1 (336.353359ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image build -t localhost/my-image:functional-773431 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-773431 image build -t localhost/my-image:functional-773431 testdata/build --alsologtostderr: (2.158109311s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-773431 image build -t localhost/my-image:functional-773431 testdata/build --alsologtostderr:
I1218 23:36:22.701813 4042747 out.go:296] Setting OutFile to fd 1 ...
I1218 23:36:22.703194 4042747 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:22.703208 4042747 out.go:309] Setting ErrFile to fd 2...
I1218 23:36:22.703215 4042747 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:36:22.703494 4042747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
I1218 23:36:22.704224 4042747 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:22.705964 4042747 config.go:182] Loaded profile config "functional-773431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 23:36:22.706691 4042747 cli_runner.go:164] Run: docker container inspect functional-773431 --format={{.State.Status}}
I1218 23:36:22.726207 4042747 ssh_runner.go:195] Run: systemctl --version
I1218 23:36:22.726259 4042747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773431
I1218 23:36:22.744959 4042747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42686 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/functional-773431/id_rsa Username:docker}
I1218 23:36:22.846804 4042747 build_images.go:151] Building image from path: /tmp/build.2329825487.tar
I1218 23:36:22.846893 4042747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 23:36:22.858244 4042747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2329825487.tar
I1218 23:36:22.862840 4042747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2329825487.tar: stat -c "%s %y" /var/lib/minikube/build/build.2329825487.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2329825487.tar': No such file or directory
I1218 23:36:22.862874 4042747 ssh_runner.go:362] scp /tmp/build.2329825487.tar --> /var/lib/minikube/build/build.2329825487.tar (3072 bytes)
I1218 23:36:22.894204 4042747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2329825487
I1218 23:36:22.905521 4042747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2329825487 -xf /var/lib/minikube/build/build.2329825487.tar
I1218 23:36:22.917640 4042747 containerd.go:378] Building image: /var/lib/minikube/build/build.2329825487
I1218 23:36:22.917730 4042747 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2329825487 --local dockerfile=/var/lib/minikube/build/build.2329825487 --output type=image,name=localhost/my-image:functional-773431
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:056cf109a9e53e60729b0993c1329d95f031275f3a06236907b5e45c4f648662
#8 exporting manifest sha256:056cf109a9e53e60729b0993c1329d95f031275f3a06236907b5e45c4f648662 0.0s done
#8 exporting config sha256:8e7c97c4569756f10921b25fdf98f641083401333ad8263556920119e25a2d67 0.0s done
#8 naming to localhost/my-image:functional-773431 done
#8 DONE 0.1s
I1218 23:36:24.759751 4042747 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2329825487 --local dockerfile=/var/lib/minikube/build/build.2329825487 --output type=image,name=localhost/my-image:functional-773431: (1.841989844s)
I1218 23:36:24.759839 4042747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2329825487
I1218 23:36:24.770947 4042747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2329825487.tar
I1218 23:36:24.781695 4042747 build_images.go:207] Built localhost/my-image:functional-773431 from /tmp/build.2329825487.tar
I1218 23:36:24.781726 4042747 build_images.go:123] succeeded building to: functional-773431
I1218 23:36:24.781732 4042747 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

TestFunctional/parallel/ImageCommands/Setup (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/12/18 23:36:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.482248535s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-773431
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image rm gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-773431
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-773431 image save --daemon gcr.io/google-containers/addon-resizer:functional-773431 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-773431
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-773431
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-773431
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-773431
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (79.58s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-909642 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1218 23:36:39.695063 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-909642 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m19.575683047s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (79.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.04s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons enable ingress --alsologtostderr -v=5: (10.035984455s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.04s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-909642 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

TestJSONOutput/start/Command (89.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-313377 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1218 23:38:55.848410 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:39:23.535526 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:40:19.853828 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:19.859113 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:19.869429 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:19.889673 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:19.930023 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:20.011728 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:20.172220 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-313377 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m29.769973279s)
--- PASS: TestJSONOutput/start/Command (89.77s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.85s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-313377 --output=json --user=testUser
E1218 23:40:20.493909 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:21.134850 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.85s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-313377 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-313377 --output=json --user=testUser
E1218 23:40:22.415739 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:40:24.976614 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-313377 --output=json --user=testUser: (5.870353035s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-252864 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-252864 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.293243ms)
-- stdout --
	{"specversion":"1.0","id":"5b466072-e792-4158-9811-1732ecd05005","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-252864] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa85e24e-f86e-4635-957c-b8b5cf53357d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"98b00704-db30-411f-b4ab-e06a3db43b7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9bc375ae-b277-4d5a-b058-b8728b7f12e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig"}}
	{"specversion":"1.0","id":"827b5d1b-3b78-4dee-bd86-2e57fe78b830","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube"}}
	{"specversion":"1.0","id":"5a90650c-7872-4646-9fc5-0a1d70abfbff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1f160937-13b4-48d6-bfaa-fe500a7008d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e150c2f6-18e9-4e4f-8f58-ac100ea39a78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-252864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-252864
--- PASS: TestErrorJSONOutput (0.27s)

TestKicCustomNetwork/create_custom_network (42.38s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-187871 --network=
E1218 23:40:40.337152 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:41:00.817386 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-187871 --network=: (40.214668679s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-187871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-187871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-187871: (2.142029028s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.38s)

TestKicCustomNetwork/use_default_bridge_network (36.14s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-143010 --network=bridge
E1218 23:41:41.777603 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-143010 --network=bridge: (34.044712558s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-143010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-143010
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-143010: (2.072944438s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.14s)

TestKicExistingNetwork (36.87s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-809390 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-809390 --network=existing-network: (34.731435336s)
helpers_test.go:175: Cleaning up "existing-network-809390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-809390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-809390: (1.97302502s)
--- PASS: TestKicExistingNetwork (36.87s)

TestKicCustomSubnet (34.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-569256 --subnet=192.168.60.0/24
E1218 23:42:58.409809 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:58.415060 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:58.425275 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:58.445530 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:58.485789 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:58.566035 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:58.726314 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:59.046595 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:42:59.687373 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:43:00.967926 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-569256 --subnet=192.168.60.0/24: (32.162114478s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-569256 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-569256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-569256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-569256: (2.201675165s)
--- PASS: TestKicCustomSubnet (34.39s)

TestKicStaticIP (39.57s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-592706 --static-ip=192.168.200.200
E1218 23:43:03.528852 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:43:03.698588 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1218 23:43:08.651825 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:43:18.892769 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:43:39.373918 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-592706 --static-ip=192.168.200.200: (37.19211871s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-592706 ip
helpers_test.go:175: Cleaning up "static-ip-592706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-592706
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-592706: (2.193958519s)
--- PASS: TestKicStaticIP (39.57s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (70.67s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-628925 --driver=docker  --container-runtime=containerd
E1218 23:43:55.848111 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-628925 --driver=docker  --container-runtime=containerd: (33.059931397s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-631665 --driver=docker  --container-runtime=containerd
E1218 23:44:20.334132 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-631665 --driver=docker  --container-runtime=containerd: (32.27236022s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-628925
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-631665
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-631665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-631665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-631665: (2.015119773s)
helpers_test.go:175: Cleaning up "first-628925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-628925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-628925: (2.001028747s)
--- PASS: TestMinikubeProfile (70.67s)

TestMountStart/serial/StartWithMountFirst (7.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-670295 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-670295 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.490628604s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.49s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-670295 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (9.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-672239 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-672239 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.047711519s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.05s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-672239 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-670295 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-670295 --alsologtostderr -v=5: (1.69369818s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-672239 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-672239
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-672239: (1.235083906s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.78s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-672239
E1218 23:45:19.854798 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-672239: (6.783026346s)
--- PASS: TestMountStart/serial/RestartStopped (7.78s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-672239 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (77.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270938 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1218 23:45:42.255067 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:45:47.538936 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270938 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.563723335s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.13s)

TestMultiNode/serial/DeployApp2Nodes (5.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-270938 -- rollout status deployment/busybox: (2.976271771s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-6tgv4 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-dqnm7 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-6tgv4 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-dqnm7 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-6tgv4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-dqnm7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.25s)

TestMultiNode/serial/PingHostFrom2Pods (1.13s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-6tgv4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-6tgv4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-dqnm7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270938 -- exec busybox-5bc68d56bd-dqnm7 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.13s)
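The host-IP extraction above depends on the line layout of busybox-style nslookup output. A standalone sketch (using fabricated sample output, since no cluster is involved here) shows what the `awk 'NR==5' | cut -d' ' -f3` pipeline from the test actually picks out:

```shell
# Fabricated busybox-style nslookup output: line 5 is the second "Address" line,
# e.g. "Address 1: 192.168.58.1 host.minikube.internal".
printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.58.1 host.minikube.internal\n' |
  awk 'NR==5' |     # keep only line 5 of the output
  cut -d' ' -f3     # third space-delimited field is the bare IP
```

The extracted address is then what the follow-up `ping -c 1` commands in the test target.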

TestMultiNode/serial/AddNode (16.78s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-270938 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-270938 -v 3 --alsologtostderr: (16.007239783s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.78s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-270938 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (11.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp testdata/cp-test.txt multinode-270938:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4148405394/001/cp-test_multinode-270938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938:/home/docker/cp-test.txt multinode-270938-m02:/home/docker/cp-test_multinode-270938_multinode-270938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m02 "sudo cat /home/docker/cp-test_multinode-270938_multinode-270938-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938:/home/docker/cp-test.txt multinode-270938-m03:/home/docker/cp-test_multinode-270938_multinode-270938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m03 "sudo cat /home/docker/cp-test_multinode-270938_multinode-270938-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp testdata/cp-test.txt multinode-270938-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4148405394/001/cp-test_multinode-270938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938-m02:/home/docker/cp-test.txt multinode-270938:/home/docker/cp-test_multinode-270938-m02_multinode-270938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938 "sudo cat /home/docker/cp-test_multinode-270938-m02_multinode-270938.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938-m02:/home/docker/cp-test.txt multinode-270938-m03:/home/docker/cp-test_multinode-270938-m02_multinode-270938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m03 "sudo cat /home/docker/cp-test_multinode-270938-m02_multinode-270938-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp testdata/cp-test.txt multinode-270938-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4148405394/001/cp-test_multinode-270938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938-m03:/home/docker/cp-test.txt multinode-270938:/home/docker/cp-test_multinode-270938-m03_multinode-270938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938 "sudo cat /home/docker/cp-test_multinode-270938-m03_multinode-270938.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 cp multinode-270938-m03:/home/docker/cp-test.txt multinode-270938-m02:/home/docker/cp-test_multinode-270938-m03_multinode-270938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 ssh -n multinode-270938-m02 "sudo cat /home/docker/cp-test_multinode-270938-m03_multinode-270938-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.93s)

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-270938 node stop m03: (1.229360345s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270938 status: exit status 7 (609.744696ms)

-- stdout --
	multinode-270938
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270938-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-270938-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr: exit status 7 (599.626621ms)

-- stdout --
	multinode-270938
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270938-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-270938-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1218 23:47:18.482583 4089793 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:47:18.482751 4089793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:47:18.482762 4089793 out.go:309] Setting ErrFile to fd 2...
	I1218 23:47:18.482768 4089793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:47:18.483027 4089793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:47:18.483204 4089793 out.go:303] Setting JSON to false
	I1218 23:47:18.483280 4089793 mustload.go:65] Loading cluster: multinode-270938
	I1218 23:47:18.483358 4089793 notify.go:220] Checking for updates...
	I1218 23:47:18.484840 4089793 config.go:182] Loaded profile config "multinode-270938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:47:18.484885 4089793 status.go:255] checking status of multinode-270938 ...
	I1218 23:47:18.486163 4089793 cli_runner.go:164] Run: docker container inspect multinode-270938 --format={{.State.Status}}
	I1218 23:47:18.508115 4089793 status.go:330] multinode-270938 host status = "Running" (err=<nil>)
	I1218 23:47:18.508174 4089793 host.go:66] Checking if "multinode-270938" exists ...
	I1218 23:47:18.508518 4089793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270938
	I1218 23:47:18.527668 4089793 host.go:66] Checking if "multinode-270938" exists ...
	I1218 23:47:18.528005 4089793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:47:18.528057 4089793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270938
	I1218 23:47:18.554789 4089793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42751 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/multinode-270938/id_rsa Username:docker}
	I1218 23:47:18.660115 4089793 ssh_runner.go:195] Run: systemctl --version
	I1218 23:47:18.666023 4089793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:47:18.682313 4089793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:47:18.757695 4089793 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-18 23:47:18.747622746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:47:18.758295 4089793 kubeconfig.go:92] found "multinode-270938" server: "https://192.168.58.2:8443"
	I1218 23:47:18.758319 4089793 api_server.go:166] Checking apiserver status ...
	I1218 23:47:18.758362 4089793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:47:18.771913 4089793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1293/cgroup
	I1218 23:47:18.783718 4089793 api_server.go:182] apiserver freezer: "6:freezer:/docker/c8e5f19be2d0e9bcee9541229f5127cc08baf4d2ba41c32dc1866906315a8f89/kubepods/burstable/pod8bacf45a20d29ee9574a8eecd0a4d92c/3ac77d2d24b08d34a0a4fe30b3ebd28ff7949b1f2a6ee195d67bd69c55638321"
	I1218 23:47:18.783821 4089793 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c8e5f19be2d0e9bcee9541229f5127cc08baf4d2ba41c32dc1866906315a8f89/kubepods/burstable/pod8bacf45a20d29ee9574a8eecd0a4d92c/3ac77d2d24b08d34a0a4fe30b3ebd28ff7949b1f2a6ee195d67bd69c55638321/freezer.state
	I1218 23:47:18.794965 4089793 api_server.go:204] freezer state: "THAWED"
	I1218 23:47:18.794998 4089793 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1218 23:47:18.803870 4089793 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1218 23:47:18.803902 4089793 status.go:421] multinode-270938 apiserver status = Running (err=<nil>)
	I1218 23:47:18.803928 4089793 status.go:257] multinode-270938 status: &{Name:multinode-270938 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:47:18.803952 4089793 status.go:255] checking status of multinode-270938-m02 ...
	I1218 23:47:18.804281 4089793 cli_runner.go:164] Run: docker container inspect multinode-270938-m02 --format={{.State.Status}}
	I1218 23:47:18.822377 4089793 status.go:330] multinode-270938-m02 host status = "Running" (err=<nil>)
	I1218 23:47:18.822415 4089793 host.go:66] Checking if "multinode-270938-m02" exists ...
	I1218 23:47:18.822715 4089793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270938-m02
	I1218 23:47:18.844656 4089793 host.go:66] Checking if "multinode-270938-m02" exists ...
	I1218 23:47:18.845001 4089793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:47:18.845050 4089793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270938-m02
	I1218 23:47:18.863511 4089793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42756 SSHKeyPath:/home/jenkins/minikube-integration/17822-4004447/.minikube/machines/multinode-270938-m02/id_rsa Username:docker}
	I1218 23:47:18.963546 4089793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:47:18.977174 4089793 status.go:257] multinode-270938-m02 status: &{Name:multinode-270938-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:47:18.977255 4089793 status.go:255] checking status of multinode-270938-m03 ...
	I1218 23:47:18.977582 4089793 cli_runner.go:164] Run: docker container inspect multinode-270938-m03 --format={{.State.Status}}
	I1218 23:47:19.001680 4089793 status.go:330] multinode-270938-m03 host status = "Stopped" (err=<nil>)
	I1218 23:47:19.001707 4089793 status.go:343] host is not running, skipping remaining checks
	I1218 23:47:19.001715 4089793 status.go:257] multinode-270938-m03 status: &{Name:multinode-270938-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
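The apiserver health probe in the stderr trace above first locates the process's freezer cgroup with `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup` before reading `freezer.state`. A local sketch with fabricated `/proc/<pid>/cgroup`-style entries (the container paths here are made up; the real ones appear in the log) shows what that pattern matches:

```shell
# Fabricated cgroup v1 entries in /proc/<pid>/cgroup format (hierarchy:subsystem:path).
sample='7:memory:/docker/abc123/kubepods/burstable/pod1/ctr1
6:freezer:/docker/abc123/kubepods/burstable/pod1/ctr1
5:cpuset:/docker/abc123/kubepods/burstable/pod1/ctr1'
# Same pattern as the egrep in the log: keep only the freezer hierarchy line.
printf '%s\n' "$sample" | grep -E '^[0-9]+:freezer:'
```

The path portion of the matched line is then appended to `/sys/fs/cgroup/freezer/` to read the `freezer.state` file, as the next log line shows.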

TestMultiNode/serial/StartAfterStop (12.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-270938 node start m03 --alsologtostderr: (11.38657127s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.27s)

TestMultiNode/serial/RestartKeepsNodes (119.56s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-270938
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-270938
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-270938: (25.209517084s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270938 --wait=true -v=8 --alsologtostderr
E1218 23:47:58.410446 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:48:26.096163 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1218 23:48:55.848626 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270938 --wait=true -v=8 --alsologtostderr: (1m34.174749753s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-270938
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.56s)

TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-270938 node delete m03: (4.450948567s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

                                                
TestMultiNode/serial/StopMultiNode (24.48s)

multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-270938 stop: (24.202880355s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270938 status: exit status 7 (159.263312ms)

-- stdout --
	multinode-270938
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-270938-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr: exit status 7 (117.315448ms)

-- stdout --
	multinode-270938
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-270938-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1218 23:50:00.523209 4098550 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:50:00.523438 4098550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:50:00.523453 4098550 out.go:309] Setting ErrFile to fd 2...
	I1218 23:50:00.523461 4098550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:50:00.523771 4098550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1218 23:50:00.524012 4098550 out.go:303] Setting JSON to false
	I1218 23:50:00.524135 4098550 mustload.go:65] Loading cluster: multinode-270938
	I1218 23:50:00.524246 4098550 notify.go:220] Checking for updates...
	I1218 23:50:00.524646 4098550 config.go:182] Loaded profile config "multinode-270938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:50:00.524661 4098550 status.go:255] checking status of multinode-270938 ...
	I1218 23:50:00.525243 4098550 cli_runner.go:164] Run: docker container inspect multinode-270938 --format={{.State.Status}}
	I1218 23:50:00.548064 4098550 status.go:330] multinode-270938 host status = "Stopped" (err=<nil>)
	I1218 23:50:00.548110 4098550 status.go:343] host is not running, skipping remaining checks
	I1218 23:50:00.548128 4098550 status.go:257] multinode-270938 status: &{Name:multinode-270938 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:50:00.548238 4098550 status.go:255] checking status of multinode-270938-m02 ...
	I1218 23:50:00.548577 4098550 cli_runner.go:164] Run: docker container inspect multinode-270938-m02 --format={{.State.Status}}
	I1218 23:50:00.568112 4098550 status.go:330] multinode-270938-m02 host status = "Stopped" (err=<nil>)
	I1218 23:50:00.568142 4098550 status.go:343] host is not running, skipping remaining checks
	I1218 23:50:00.568151 4098550 status.go:257] multinode-270938-m02 status: &{Name:multinode-270938-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.48s)
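The `status` probes in this report also check disk usage on each running node with `df -h /var | awk 'NR==2{print $5}'` (visible in the StopNode stderr trace earlier). A sketch with fabricated `df` output shows which cell that awk program returns:

```shell
# Fabricated df -h output (numbers are made up) in the layout awk expects:
# header on line 1, the mount's data row on line 2.
printf 'Filesystem      Size  Used Avail Use%% Mounted on\n/dev/root        97G   12G   85G  13%% /\n' |
  awk 'NR==2{print $5}'   # second line, fifth whitespace-delimited column: the Use% figure
```

Only that percentage string is kept, which is enough for the status check to warn on a nearly full disk.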

TestMultiNode/serial/RestartMultiNode (79.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270938 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1218 23:50:18.896576 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1218 23:50:19.854308 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270938 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.950211456s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270938 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.82s)

                                                
TestMultiNode/serial/ValidateNameConflict (34.43s)

multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-270938
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270938-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-270938-m02 --driver=docker  --container-runtime=containerd: exit status 14 (137.596568ms)

-- stdout --
	* [multinode-270938-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-270938-m02' is duplicated with machine name 'multinode-270938-m02' in profile 'multinode-270938'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270938-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270938-m03 --driver=docker  --container-runtime=containerd: (31.810425708s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-270938
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-270938: exit status 80 (380.92199ms)

-- stdout --
	* Adding node m03 to cluster multinode-270938
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-270938-m03 already exists in multinode-270938-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-270938-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-270938-m03: (2.003704767s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.43s)
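The `MK_USAGE` failure above exercises minikube's profile-name uniqueness rule: a new profile may not reuse an existing profile name or the name of a machine inside a multinode profile. A minimal illustrative sketch of that rule (the function and data shape are hypothetical, not minikube's actual implementation):

```python
def name_conflicts(new_profile: str, profiles: dict) -> bool:
    """Illustrative check mirroring the MK_USAGE error above.

    profiles maps each existing profile name to the machine names it owns;
    a new profile name must not collide with either.
    """
    for profile, machines in profiles.items():
        if new_profile == profile or new_profile in machines:
            return True
    return False

# Machine names as they appear in the multinode test above.
existing = {
    "multinode-270938": [
        "multinode-270938",
        "multinode-270938-m02",
        "multinode-270938-m03",
    ]
}

print(name_conflicts("multinode-270938-m02", existing))  # True  -> exit status 14
print(name_conflicts("multinode-270938-m04", existing))  # False -> would be allowed
```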

TestPreload (146.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-193412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1218 23:52:58.409929 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-193412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.924994597s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-193412 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-193412 image pull gcr.io/k8s-minikube/busybox: (1.330686597s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-193412
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-193412: (12.030648527s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-193412 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1218 23:53:55.848630 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-193412 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.391450537s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-193412 image list
helpers_test.go:175: Cleaning up "test-preload-193412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-193412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-193412: (2.399712855s)
--- PASS: TestPreload (146.34s)

TestScheduledStopUnix (108.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-980893 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-980893 --memory=2048 --driver=docker  --container-runtime=containerd: (31.799932075s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980893 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-980893 -n scheduled-stop-980893
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980893 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980893 --cancel-scheduled
E1218 23:55:19.854638 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-980893 -n scheduled-stop-980893
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-980893
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980893 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-980893
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-980893: exit status 7 (89.122358ms)

-- stdout --
	scheduled-stop-980893
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-980893 -n scheduled-stop-980893
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-980893 -n scheduled-stop-980893: exit status 7 (89.073891ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-980893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-980893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-980893: (4.413766967s)
--- PASS: TestScheduledStopUnix (108.16s)

TestInsufficientStorage (10.19s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-120081 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-120081 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.586878059s)

-- stdout --
	{"specversion":"1.0","id":"278b0a0b-6f27-4145-b3ec-d1e854935fa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-120081] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c0a4804-f586-4c78-acd9-d62fcd0b6519","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"6c89ec5e-8b37-4b23-baf8-a6aba9f0ecf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d6c36a8-2adb-409f-9383-4245f9adee4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig"}}
	{"specversion":"1.0","id":"7813098a-f5fc-42d6-9dfe-63709819278b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube"}}
	{"specversion":"1.0","id":"fd3cd363-e1d1-4e4a-86cb-1fcdebb6dfa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"25d09473-8f2d-4b85-aed7-217de9c969e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82c24396-1e76-456b-869f-8403ec8f699b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f323c6c3-9ece-4018-be5f-0d7d5f75546f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c44bdc5d-34cd-4f8e-b43c-5d1ddee1aa68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a1e82ce-59b1-4eb6-83af-0f7cf5a23f43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ad25a60a-8c8d-4abc-b3fc-652c669b31d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-120081 in cluster insufficient-storage-120081","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7d84020-5cad-40bc-91b6-8c88fc524eed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702920864-17822 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c50d27d-8d46-4113-937e-37cf74733f9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bf57cd0-59bf-44b7-a926-a1e460823cb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-120081 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-120081 --output=json --layout=cluster: exit status 7 (339.966931ms)

-- stdout --
	{"Name":"insufficient-storage-120081","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-120081","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1218 23:56:21.227953 4115782 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-120081" does not appear in /home/jenkins/minikube-integration/17822-4004447/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-120081 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-120081 --output=json --layout=cluster: exit status 7 (340.06718ms)

-- stdout --
	{"Name":"insufficient-storage-120081","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-120081","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1218 23:56:21.572088 4115835 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-120081" does not appear in /home/jenkins/minikube-integration/17822-4004447/kubeconfig
	E1218 23:56:21.584356 4115835 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/insufficient-storage-120081/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-120081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-120081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-120081: (1.925877009s)
--- PASS: TestInsufficientStorage (10.19s)
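With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as seen in the stdout block above. A short sketch of how such a stream can be filtered for the error event and its exit code, using two lines copied (one abridged) from the log:

```python
import json

# JSON-lines stream as emitted by `minikube start --output=json`
# (one step event plus the error event from the log above, abridged).
LOG = '''
{"specversion":"1.0","id":"c44bdc5d-34cd-4f8e-b43c-5d1ddee1aa68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
{"specversion":"1.0","id":"3bf57cd0-59bf-44b7-a926-a1e460823cb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space!","name":"RSRC_DOCKER_STORAGE","url":""}}
'''

def find_errors(stream: str) -> list:
    """Return the data payload of every error event in a JSON-lines stream."""
    events = (json.loads(line) for line in stream.splitlines() if line.strip())
    return [e["data"] for e in events if e["type"] == "io.k8s.sigs.minikube.error"]

errors = find_errors(LOG)
print(errors[0]["name"], errors[0]["exitcode"])  # RSRC_DOCKER_STORAGE 26
```

This is how the test harness can map the `RSRC_DOCKER_STORAGE` event to the observed exit status 26 without scraping human-readable text.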

TestRunningBinaryUpgrade (90.59s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.3908525861.exe start -p running-upgrade-119531 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.3908525861.exe start -p running-upgrade-119531 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.884167063s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-119531 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-119531 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.377299846s)
helpers_test.go:175: Cleaning up "running-upgrade-119531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-119531
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-119531: (3.044334919s)
--- PASS: TestRunningBinaryUpgrade (90.59s)

TestKubernetesUpgrade (402.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1218 23:58:55.848608 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.71933194s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-065234
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-065234: (1.516810791s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-065234 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-065234 status --format={{.Host}}: exit status 7 (96.365204ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m53.502008662s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-065234 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (139.307543ms)

-- stdout --
	* [kubernetes-upgrade-065234] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-065234
	    minikube start -p kubernetes-upgrade-065234 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0652342 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-065234 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-065234 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.824055657s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-065234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-065234
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-065234: (2.472046364s)
--- PASS: TestKubernetesUpgrade (402.42s)

TestMissingContainerUpgrade (169.28s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.3779947800.exe start -p missing-upgrade-122483 --memory=2200 --driver=docker  --container-runtime=containerd
E1218 23:56:42.899148 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.3779947800.exe start -p missing-upgrade-122483 --memory=2200 --driver=docker  --container-runtime=containerd: (1m25.634174384s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-122483
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-122483: (10.330210228s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-122483
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-122483 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-122483 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m9.662658976s)
helpers_test.go:175: Cleaning up "missing-upgrade-122483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-122483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-122483: (2.482457125s)
--- PASS: TestMissingContainerUpgrade (169.28s)

TestPause/serial/Start (91.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-078174 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-078174 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m31.366928487s)
--- PASS: TestPause/serial/Start (91.37s)

TestPause/serial/SecondStartNoReconfiguration (6.35s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-078174 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1218 23:57:58.409703 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-078174 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.329278084s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.35s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-078174 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-078174 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-078174 --output=json --layout=cluster: exit status 2 (397.877891ms)

-- stdout --
	{"Name":"pause-078174","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-078174","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
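The `--layout=cluster` status above is a single JSON document with per-node component states (418 = Paused, 405 = Stopped, 200 = OK). A small sketch that flattens it into component states, using an abridged copy of the JSON from the log:

```python
import json

# Abridged cluster-layout status from `minikube status --output=json --layout=cluster`
# as printed in the VerifyStatus run above.
STATUS = json.loads(
    '{"Name":"pause-078174","StatusCode":418,"StatusName":"Paused",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
    '"Nodes":[{"Name":"pause-078174","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

def component_states(status: dict) -> dict:
    """Flatten per-node component states into {node/component: StatusName}."""
    out = {}
    for node in status.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            out[f'{node["Name"]}/{name}'] = comp["StatusName"]
    return out

states = component_states(STATUS)
print(states)  # {'pause-078174/apiserver': 'Paused', 'pause-078174/kubelet': 'Stopped'}
```

The paused apiserver plus stopped kubelet is exactly the combination that makes the status command return the non-zero exit status 2 shown above.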

TestPause/serial/Unpause (0.77s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-078174 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

TestPause/serial/PauseAgain (0.97s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-078174 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

TestPause/serial/DeletePaused (2.62s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-078174 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-078174 --alsologtostderr -v=5: (2.620444747s)
--- PASS: TestPause/serial/DeletePaused (2.62s)

TestPause/serial/VerifyDeletedResources (0.16s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-078174
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-078174: exit status 1 (17.635764ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-078174: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)

TestStoppedBinaryUpgrade/Setup (1.22s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

TestStoppedBinaryUpgrade/Upgrade (101.97s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.2381007120.exe start -p stopped-upgrade-917215 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1218 23:59:21.456983 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.2381007120.exe start -p stopped-upgrade-917215 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.078517569s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.2381007120.exe -p stopped-upgrade-917215 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.2381007120.exe -p stopped-upgrade-917215 stop: (20.062790482s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-917215 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1219 00:00:19.854030 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-917215 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.824272662s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.97s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-917215
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-917215: (1.437333929s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-208080 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-208080 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (104.137243ms)

-- stdout --
	* [NoKubernetes-208080] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
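Exit status 14 is minikube's usage-error (`MK_USAGE`) exit code, and this subtest passes precisely because the flag combination is rejected. A sketch of that mutual-exclusion check (hypothetical shell port of logic minikube implements in Go; `validate_flags` is not a real minikube helper):

```shell
# Hypothetical port of the mutually exclusive flag check behind the
# MK_USAGE error above: --kubernetes-version cannot be combined with
# --no-kubernetes. Returns 14, matching minikube's usage-error exit code.
validate_flags() {
  no_kubernetes=$1 kubernetes_version=$2
  if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
    echo 'X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes' >&2
    return 14
  fi
  return 0
}

# Demonstrate the rejected combination without tripping `set -e`.
if validate_flags true 1.20 2>/dev/null; then
  echo "unexpectedly accepted"
else
  echo "rejected with exit=$?"
fi
```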

TestNoKubernetes/serial/StartWithK8s (32.38s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-208080 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-208080 --driver=docker  --container-runtime=containerd: (31.783721588s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-208080 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.38s)

TestNoKubernetes/serial/StartWithStopK8s (8.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-208080 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-208080 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.807229844s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-208080 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-208080 status -o json: exit status 2 (448.555962ms)

-- stdout --
	{"Name":"NoKubernetes-208080","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-208080
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-208080: (2.077615561s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.33s)

TestNoKubernetes/serial/Start (6.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-208080 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-208080 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.356609472s)
--- PASS: TestNoKubernetes/serial/Start (6.36s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-208080 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-208080 "sudo systemctl is-active --quiet service kubelet": exit status 1 (335.583474ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
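The `ssh: Process exited with status 3` line above is expected: by LSB/systemd convention, `systemctl is-active` exits 0 only for an active unit and 3 for an inactive one, and `minikube ssh` propagates the remote command's exit status. A sketch of that propagation (the `stub_is_active` function is a stand-in, so the sketch runs without systemd or SSH):

```shell
# LSB/systemd convention: `systemctl is-active` exits 0 when the unit
# is active and 3 when it is inactive/stopped. This stub stands in for
# `systemctl is-active --quiet kubelet` on a host with no kubelet.
stub_is_active() {
  return 3   # unit not running
}

# `minikube ssh` surfaces the remote exit status verbatim, which is
# the non-zero status the test asserts on.
if stub_is_active; then
  echo "kubelet running"
else
  echo "kubelet not running (status $?)"
fi
```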

TestNoKubernetes/serial/ProfileList (1.17s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-208080
E1219 00:03:55.848366 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-208080: (1.338392864s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (8.00s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-208080 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-208080 --driver=docker  --container-runtime=containerd: (7.998402062s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.00s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-208080 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-208080 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.191983ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/false (5.81s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-224762 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-224762 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (223.772893ms)

-- stdout --
	* [false-224762] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1219 00:04:13.274262 4154636 out.go:296] Setting OutFile to fd 1 ...
	I1219 00:04:13.274418 4154636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:04:13.274427 4154636 out.go:309] Setting ErrFile to fd 2...
	I1219 00:04:13.274433 4154636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:04:13.274655 4154636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-4004447/.minikube/bin
	I1219 00:04:13.275145 4154636 out.go:303] Setting JSON to false
	I1219 00:04:13.276179 4154636 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":200797,"bootTime":1702743457,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1219 00:04:13.276258 4154636 start.go:138] virtualization:  
	I1219 00:04:13.280514 4154636 out.go:177] * [false-224762] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1219 00:04:13.282730 4154636 out.go:177]   - MINIKUBE_LOCATION=17822
	I1219 00:04:13.285150 4154636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 00:04:13.282940 4154636 notify.go:220] Checking for updates...
	I1219 00:04:13.290705 4154636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-4004447/kubeconfig
	I1219 00:04:13.292615 4154636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-4004447/.minikube
	I1219 00:04:13.294732 4154636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1219 00:04:13.297123 4154636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 00:04:13.299527 4154636 config.go:182] Loaded profile config "kubernetes-upgrade-065234": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I1219 00:04:13.299690 4154636 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 00:04:13.327518 4154636 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1219 00:04:13.327658 4154636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:04:13.418477 4154636 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-19 00:04:13.408609327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:04:13.418585 4154636 docker.go:295] overlay module found
	I1219 00:04:13.421611 4154636 out.go:177] * Using the docker driver based on user configuration
	I1219 00:04:13.423570 4154636 start.go:298] selected driver: docker
	I1219 00:04:13.423591 4154636 start.go:902] validating driver "docker" against <nil>
	I1219 00:04:13.423604 4154636 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 00:04:13.426079 4154636 out.go:177] 
	W1219 00:04:13.428320 4154636 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1219 00:04:13.430376 4154636 out.go:177] 

** /stderr **
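The MK_USAGE error above comes from driver/runtime validation: minikube rejects `--cni=false` when the container runtime is containerd, since containerd has no built-in pod networking. A sketch of that check (hypothetical shell port of validation minikube performs in Go; `validate_cni` is not a real minikube helper):

```shell
# Hypothetical port of the validation behind the error above: the
# containerd runtime cannot run with CNI disabled. Returns 14 to match
# minikube's MK_USAGE exit code.
validate_cni() {
  runtime=$1 cni=$2
  if [ "$runtime" = "containerd" ] && [ "$cni" = "false" ]; then
    echo "X Exiting due to MK_USAGE: The \"$runtime\" container runtime requires CNI" >&2
    return 14
  fi
  return 0
}

if ! validate_cni containerd false 2>/dev/null; then
  echo "rejected as expected"
fi
```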
net_test.go:88: 
----------------------- debugLogs start: false-224762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-224762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 19 Dec 2023 00:04:15 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: cluster_info
server: https://192.168.67.2:8443
name: kubernetes-upgrade-065234
contexts:
- context:
cluster: kubernetes-upgrade-065234
extensions:
- extension:
last-update: Tue, 19 Dec 2023 00:04:15 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: context_info
namespace: default
user: kubernetes-upgrade-065234
name: kubernetes-upgrade-065234
current-context: kubernetes-upgrade-065234
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-065234
user:
client-certificate: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/kubernetes-upgrade-065234/client.crt
client-key: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/kubernetes-upgrade-065234/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-224762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: docker system info:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: cri-docker daemon status:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: cri-docker daemon config:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: cri-dockerd version:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: containerd daemon status:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: containerd daemon config:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: /etc/containerd/config.toml:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: containerd config dump:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: crio daemon status:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: crio daemon config:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: /etc/crio:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

>>> host: crio config:
* Profile "false-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224762"

----------------------- debugLogs end: false-224762 [took: 5.336321481s] --------------------------------
helpers_test.go:175: Cleaning up "false-224762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-224762
--- PASS: TestNetworkPlugins/group/false (5.81s)

TestStartStop/group/old-k8s-version/serial/FirstStart (130.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-753973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1219 00:06:58.897385 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1219 00:07:58.410392 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-753973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m10.732540138s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.73s)

TestStartStop/group/no-preload/serial/FirstStart (74.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-714134 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-714134 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m14.909578519s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.91s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-753973 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a632210c-930b-4956-9d23-23776f87e34a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a632210c-930b-4956-9d23-23776f87e34a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002871507s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-753973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.75s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-753973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-753973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.598175497s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-753973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.75s)

TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-753973 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-753973 --alsologtostderr -v=3: (13.14450411s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-753973 -n old-k8s-version-753973
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-753973 -n old-k8s-version-753973: exit status 7 (116.45153ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-753973 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (666.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-753973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1219 00:08:55.848647 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-753973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m5.595718597s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-753973 -n old-k8s-version-753973
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (666.03s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-714134 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dcb4dd55-594c-430d-8b6f-21bb301b0767] Pending
helpers_test.go:344: "busybox" [dcb4dd55-594c-430d-8b6f-21bb301b0767] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dcb4dd55-594c-430d-8b6f-21bb301b0767] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004259374s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-714134 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-714134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-714134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200880964s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-714134 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-714134 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-714134 --alsologtostderr -v=3: (12.175554886s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-714134 -n no-preload-714134
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-714134 -n no-preload-714134: exit status 7 (91.507935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-714134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (345.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-714134 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1219 00:10:19.853819 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1219 00:12:58.410367 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1219 00:13:22.900243 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1219 00:13:55.848510 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1219 00:15:19.853965 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-714134 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m45.134286893s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-714134 -n no-preload-714134
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (345.62s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jjr4w" [125a3c03-4316-4ab0-8889-3dbb359a6db2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jjr4w" [125a3c03-4316-4ab0-8889-3dbb359a6db2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.0040921s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jjr4w" [125a3c03-4316-4ab0-8889-3dbb359a6db2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003766083s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-714134 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-714134 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.56s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-714134 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-714134 -n no-preload-714134
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-714134 -n no-preload-714134: exit status 2 (386.881674ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-714134 -n no-preload-714134
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-714134 -n no-preload-714134: exit status 2 (378.597726ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-714134 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-714134 -n no-preload-714134
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-714134 -n no-preload-714134
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.56s)

TestStartStop/group/embed-certs/serial/FirstStart (61.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-797704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-797704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m1.773714158s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.77s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-797704 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [21207121-76d4-41ad-82e4-1d49812ef3b3] Pending
helpers_test.go:344: "busybox" [21207121-76d4-41ad-82e4-1d49812ef3b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [21207121-76d4-41ad-82e4-1d49812ef3b3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004210975s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-797704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-797704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-797704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.156015411s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-797704 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-797704 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-797704 --alsologtostderr -v=3: (12.10147845s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-797704 -n embed-certs-797704
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-797704 -n embed-certs-797704: exit status 7 (86.949556ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-797704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (341.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-797704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1219 00:17:58.410466 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1219 00:18:55.848072 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1219 00:19:27.098685 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.104002 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.114296 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.134612 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.174847 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.255123 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.415637 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:27.736243 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:28.377128 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:29.657353 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:32.217572 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:19:37.337849 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-797704 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m41.435375268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-797704 -n embed-certs-797704
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (341.97s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hckqg" [fd3a1ba5-c004-436c-9789-c34382bfb134] Running
E1219 00:19:47.578796 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004065609s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hckqg" [fd3a1ba5-c004-436c-9789-c34382bfb134] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005042798s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-753973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-753973 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.57s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-753973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-753973 -n old-k8s-version-753973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-753973 -n old-k8s-version-753973: exit status 2 (394.939833ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-753973 -n old-k8s-version-753973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-753973 -n old-k8s-version-753973: exit status 2 (407.572332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-753973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-753973 -n old-k8s-version-753973
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-753973 -n old-k8s-version-753973
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.57s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-921452 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1219 00:20:08.059013 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:20:19.854676 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1219 00:20:49.019154 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-921452 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (57.461710266s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.46s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-921452 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2fef8723-0fab-4b72-85b2-4c3bab4a4085] Pending
helpers_test.go:344: "busybox" [2fef8723-0fab-4b72-85b2-4c3bab4a4085] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2fef8723-0fab-4b72-85b2-4c3bab4a4085] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003345882s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-921452 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-921452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-921452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.123654084s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-921452 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-921452 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-921452 --alsologtostderr -v=3: (12.198631077s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452: exit status 7 (97.488023ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-921452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (345.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-921452 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E1219 00:22:10.939701 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:22:58.409938 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-921452 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m44.950854451s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (345.63s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fj5jb" [25fa2c8a-2a0a-4b6a-88cd-e3d6cee502af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fj5jb" [25fa2c8a-2a0a-4b6a-88cd-e3d6cee502af] Running
E1219 00:23:13.756578 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:13.761802 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:13.772546 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:13.793241 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:13.833407 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:13.913758 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:14.074007 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:14.394511 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:15.034878 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:16.315949 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004359606s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fj5jb" [25fa2c8a-2a0a-4b6a-88cd-e3d6cee502af] Running
E1219 00:23:18.876215 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004595616s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-797704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-797704 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (3.52s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-797704 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-797704 -n embed-certs-797704
E1219 00:23:23.996536 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-797704 -n embed-certs-797704: exit status 2 (375.845411ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-797704 -n embed-certs-797704
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-797704 -n embed-certs-797704: exit status 2 (367.438278ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-797704 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-797704 -n embed-certs-797704
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-797704 -n embed-certs-797704
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.52s)

TestStartStop/group/newest-cni/serial/FirstStart (52.86s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-316512 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1219 00:23:34.237355 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:38.898271 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
E1219 00:23:54.717821 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:23:55.848072 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-316512 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (52.858181875s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.86s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-316512 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-316512 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.522332741s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/newest-cni/serial/Stop (1.33s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-316512 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-316512 --alsologtostderr -v=3: (1.331934678s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-316512 -n newest-cni-316512
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-316512 -n newest-cni-316512: exit status 7 (90.696483ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-316512 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (32.29s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-316512 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1219 00:24:27.099462 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
E1219 00:24:35.678277 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:24:54.780077 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-316512 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (31.816703506s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-316512 -n newest-cni-316512
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-316512 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.73s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-316512 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-316512 -n newest-cni-316512
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-316512 -n newest-cni-316512: exit status 2 (381.641724ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-316512 -n newest-cni-316512
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-316512 -n newest-cni-316512: exit status 2 (385.64747ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-316512 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-316512 --alsologtostderr -v=1: (1.095743837s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-316512 -n newest-cni-316512
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-316512 -n newest-cni-316512
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.73s)

TestNetworkPlugins/group/auto/Start (89.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1219 00:25:19.854636 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1219 00:25:57.598496 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m29.649689546s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zl66b" [4a983ad9-8854-4563-8dd0-1372f408e771] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zl66b" [4a983ad9-8854-4563-8dd0-1372f408e771] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003915972s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.38s)

TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rl8bl" [3c410ea6-5bc4-4913-b107-2520e216817c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rl8bl" [3c410ea6-5bc4-4913-b107-2520e216817c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004300436s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestNetworkPlugins/group/kindnet/Start (90.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m30.573068615s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.57s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rl8bl" [3c410ea6-5bc4-4913-b107-2520e216817c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004670022s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-921452 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-921452 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-921452 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-921452 --alsologtostderr -v=1: (1.12318288s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452: exit status 2 (447.899144ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452: exit status 2 (468.740894ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-921452 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-921452 --alsologtostderr -v=1: (1.010604426s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-921452 -n default-k8s-diff-port-921452
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.34s)

TestNetworkPlugins/group/calico/Start (79.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1219 00:27:58.410065 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/ingress-addon-legacy-909642/client.crt: no such file or directory
E1219 00:28:13.757340 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
E1219 00:28:41.438689 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/old-k8s-version-753973/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m19.221092744s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vvkp5" [7254906f-de7e-425a-9298-e159e84c228a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005242961s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wck2z" [3a9e57e9-647b-405e-ac77-08ee00b29120] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wck2z" [3a9e57e9-647b-405e-ac77-08ee00b29120] Running
E1219 00:28:55.848393 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/addons-505406/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004039034s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d4q2d" [f8a675b7-31b0-4a15-99fa-d81c8ec7646c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005045877s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rhbwh" [ee43ff7b-e805-4d5e-b331-6812b800f0db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rhbwh" [ee43ff7b-e805-4d5e-b331-6812b800f0db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00500791s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.44s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.44s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (70.67s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1219 00:29:27.099476 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/no-preload-714134/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m10.665924469s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.67s)

TestNetworkPlugins/group/enable-default-cni/Start (53.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1219 00:30:02.901269 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
E1219 00:30:19.853938 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/functional-773431/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (53.290181652s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8hjtq" [1df6ae1e-e738-4e09-b6ee-d9e1fd97c662] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8hjtq" [1df6ae1e-e738-4e09-b6ee-d9e1fd97c662] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004372184s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-whvxd" [4d9e4f94-a987-423a-97fa-9d60bf04074a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-whvxd" [4d9e4f94-a987-423a-97fa-9d60bf04074a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004383499s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (61.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1219 00:31:09.957006 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/default-k8s-diff-port-921452/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.550245657s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.55s)

TestNetworkPlugins/group/bridge/Start (88.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1219 00:31:20.197460 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/default-k8s-diff-port-921452/client.crt: no such file or directory
E1219 00:31:34.863646 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:34.868924 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:34.879389 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:34.900096 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:34.940330 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:35.020571 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:35.180939 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:35.501412 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:36.141540 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:37.421929 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:39.982740 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:40.677634 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/default-k8s-diff-port-921452/client.crt: no such file or directory
E1219 00:31:45.103513 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
E1219 00:31:55.344021 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-224762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m28.98040703s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.98s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bxn74" [a7d08a02-98a6-479b-82c2-df8c1d825eb5] Running
E1219 00:32:15.825190 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/auto-224762/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00424569s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hjzn9" [6b570769-67b6-4f47-9e23-c6fb3abb5385] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hjzn9" [6b570769-67b6-4f47-9e23-c6fb3abb5385] Running
E1219 00:32:21.638077 4009779 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/default-k8s-diff-port-921452/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004616746s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-224762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

TestNetworkPlugins/group/bridge/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-224762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5pj58" [2f933557-366a-478b-9d3f-86675057d9d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5pj58" [2f933557-366a-478b-9d3f-86675057d9d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003755754s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.40s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-224762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-224762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
aaa_download_only_test.go:102: No preload image
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.14s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.69s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-388185 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-388185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-388185
--- SKIP: TestDownloadOnlyKic (0.69s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-334098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-334098
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (6.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-224762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-224762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-224762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-224762" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: kubelet daemon config:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> k8s: kubelet logs:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:59:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-065234
contexts:
- context:
    cluster: kubernetes-upgrade-065234
    user: kubernetes-upgrade-065234
  name: kubernetes-upgrade-065234
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-065234
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/kubernetes-upgrade-065234/client.crt
    client-key: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/kubernetes-upgrade-065234/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-224762

>>> host: docker daemon status:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: docker daemon config:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: docker system info:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: cri-docker daemon status:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: cri-docker daemon config:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: cri-dockerd version:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: containerd daemon status:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: containerd daemon config:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: containerd config dump:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: crio daemon status:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: crio daemon config:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: /etc/crio:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

>>> host: crio config:
* Profile "kubenet-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224762"

----------------------- debugLogs end: kubenet-224762 [took: 5.835959486s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-224762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-224762
--- SKIP: TestNetworkPlugins/group/kubenet (6.08s)

TestNetworkPlugins/group/cilium (6.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-224762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-224762

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-224762

>>> host: /etc/nsswitch.conf:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/hosts:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/resolv.conf:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-224762

>>> host: crictl pods:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: crictl containers:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> k8s: describe netcat deployment:
error: context "cilium-224762" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-224762" does not exist

>>> k8s: netcat logs:
error: context "cilium-224762" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-224762" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-224762" does not exist

>>> k8s: coredns logs:
error: context "cilium-224762" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-224762" does not exist

>>> k8s: api server logs:
error: context "cilium-224762" does not exist

>>> host: /etc/cni:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: ip a s:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: ip r s:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: iptables-save:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: iptables table nat:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-224762

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-224762

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-224762" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-224762" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-224762

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-224762

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-224762" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-224762" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-224762" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-224762" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-224762" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: kubelet daemon config:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> k8s: kubelet logs:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-4004447/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:04:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-065234
contexts:
- context:
    cluster: kubernetes-upgrade-065234
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:04:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-065234
  name: kubernetes-upgrade-065234
current-context: kubernetes-upgrade-065234
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-065234
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/kubernetes-upgrade-065234/client.crt
    client-key: /home/jenkins/minikube-integration/17822-4004447/.minikube/profiles/kubernetes-upgrade-065234/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-224762

>>> host: docker daemon status:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: docker daemon config:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: docker system info:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: cri-docker daemon status:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: cri-docker daemon config:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: cri-dockerd version:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: containerd daemon status:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: containerd daemon config:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: containerd config dump:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: crio daemon status:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: crio daemon config:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: /etc/crio:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

>>> host: crio config:
* Profile "cilium-224762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224762"

----------------------- debugLogs end: cilium-224762 [took: 6.081430189s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-224762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-224762
--- SKIP: TestNetworkPlugins/group/cilium (6.37s)