Test Report: Docker_Linux_containerd_arm64 17585

                    
ea770f64c27c5646b2ec1dfcd286218478f671de:2023-11-08:31788

Test fail (8/308)

TestAddons/parallel/Ingress (39.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-257591 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-257591 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-257591 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [83c5958f-19f6-45a7-b5ba-be264a7a83c8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [83c5958f-19f6-45a7-b5ba-be264a7a83c8] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.014114103s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-257591 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.046283994s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
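The 15-second `nslookup` timeout above is the direct cause of the failure: the ingress-dns endpoint at 192.168.49.2 never answered. As a generic illustration only (this helper is hypothetical and not part of the minikube test suite), a small POSIX-shell retry wrapper shows the polling pattern a harness might use before declaring such a lookup dead, instead of failing on a single timed-out attempt:

```shell
# retry ATTEMPTS DELAY CMD...  -- hypothetical helper, not from the test suite.
# Runs CMD up to ATTEMPTS times, sleeping DELAY seconds between tries;
# returns 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Demonstration with commands whose outcome is fixed; in practice the
# command would be something like: nslookup hello-john.test 192.168.49.2
retry 3 0 true  && echo "reachable"
retry 3 0 false || echo "unreachable after 3 attempts"
```

Whether a retry would have helped here is unknown; a hard "no servers could be reached" for the full 15 seconds points at the ingress-dns pod or the 192.168.49.2 network path rather than transient flakiness.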
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-257591 addons disable ingress-dns --alsologtostderr -v=1: (1.164096107s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-257591 addons disable ingress --alsologtostderr -v=1: (7.867274558s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-257591
helpers_test.go:235: (dbg) docker inspect addons-257591:

-- stdout --
	[
	    {
	        "Id": "a083b8afe76e7a45c99271dc5a9df5eb7e089e970e8cda791281345a7b98daf2",
	        "Created": "2023-11-07T23:26:51.47389526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 259458,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:26:51.815679788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/a083b8afe76e7a45c99271dc5a9df5eb7e089e970e8cda791281345a7b98daf2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a083b8afe76e7a45c99271dc5a9df5eb7e089e970e8cda791281345a7b98daf2/hostname",
	        "HostsPath": "/var/lib/docker/containers/a083b8afe76e7a45c99271dc5a9df5eb7e089e970e8cda791281345a7b98daf2/hosts",
	        "LogPath": "/var/lib/docker/containers/a083b8afe76e7a45c99271dc5a9df5eb7e089e970e8cda791281345a7b98daf2/a083b8afe76e7a45c99271dc5a9df5eb7e089e970e8cda791281345a7b98daf2-json.log",
	        "Name": "/addons-257591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-257591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-257591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9d946923781823f036248c7f637e32fd8108a241c5a997a4fd1f53c78decfc1-init/diff:/var/lib/docker/overlay2/2ff5362f4db529bcd8a3ee4777c017c39b79e4e950c43f9c0d154fe3648aa161/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9d946923781823f036248c7f637e32fd8108a241c5a997a4fd1f53c78decfc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9d946923781823f036248c7f637e32fd8108a241c5a997a4fd1f53c78decfc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9d946923781823f036248c7f637e32fd8108a241c5a997a4fd1f53c78decfc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-257591",
	                "Source": "/var/lib/docker/volumes/addons-257591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-257591",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-257591",
	                "name.minikube.sigs.k8s.io": "addons-257591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe6d6eeda44b94619536e386a54c6a23bc8ca6ee61e62f11d6319064c5cc6860",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fe6d6eeda44b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-257591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a083b8afe76e",
	                        "addons-257591"
	                    ],
	                    "NetworkID": "9f8820101e6f5e9db2845543fbda26491f738e2c565bf0482359dbbc19401700",
	                    "EndpointID": "e4d7544fa1ad9d6c271ab95c7dcb98f833622fed624de2aed4c7bafc37b64df8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-257591 -n addons-257591
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-257591 logs -n 25: (1.681345921s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-746330   | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC |                     |
	|         | -p download-only-746330                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-746330   | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |                     |
	|         | -p download-only-746330                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| delete  | -p download-only-746330                                                                     | download-only-746330   | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| delete  | -p download-only-746330                                                                     | download-only-746330   | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| start   | --download-only -p                                                                          | download-docker-056831 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |                     |
	|         | download-docker-056831                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-056831                                                                   | download-docker-056831 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-022767   | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |                     |
	|         | binary-mirror-022767                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36359                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-022767                                                                     | binary-mirror-022767   | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| addons  | enable dashboard -p                                                                         | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |                     |
	|         | addons-257591                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |                     |
	|         | addons-257591                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-257591 --wait=true                                                                | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-257591 addons                                                                        | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | addons-257591                                                                               |                        |         |         |                     |                     |
	| ip      | addons-257591 ip                                                                            | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	| addons  | addons-257591 addons disable                                                                | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-257591 ssh cat                                                                       | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /opt/local-path-provisioner/pvc-9e3ec8d5-6b02-4665-bda0-da43e0c8626d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| ssh     | addons-257591 ssh curl -s                                                                   | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-257591 addons disable                                                                | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-257591 ip                                                                            | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	| addons  | addons-257591 addons disable                                                                | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-257591 addons disable                                                                | addons-257591          | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:26:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:26:27.488444  258995 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:26:27.488609  258995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:26:27.488620  258995 out.go:309] Setting ErrFile to fd 2...
	I1107 23:26:27.488626  258995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:26:27.488884  258995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:26:27.489363  258995 out.go:303] Setting JSON to false
	I1107 23:26:27.490295  258995 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7534,"bootTime":1699392054,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1107 23:26:27.490369  258995 start.go:138] virtualization:  
	I1107 23:26:27.492875  258995 out.go:177] * [addons-257591] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:26:27.495454  258995 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:26:27.497060  258995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:26:27.495660  258995 notify.go:220] Checking for updates...
	I1107 23:26:27.500803  258995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:26:27.502622  258995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1107 23:26:27.504605  258995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:26:27.506421  258995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:26:27.508320  258995 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:26:27.535390  258995 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:26:27.535512  258995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:26:27.615941  258995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:26:27.606510921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:26:27.616052  258995 docker.go:295] overlay module found
	I1107 23:26:27.618965  258995 out.go:177] * Using the docker driver based on user configuration
	I1107 23:26:27.620691  258995 start.go:298] selected driver: docker
	I1107 23:26:27.620711  258995 start.go:902] validating driver "docker" against <nil>
	I1107 23:26:27.620725  258995 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:26:27.621522  258995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:26:27.694207  258995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:26:27.684436424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:26:27.694370  258995 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:26:27.694603  258995 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:26:27.696562  258995 out.go:177] * Using Docker driver with root privileges
	I1107 23:26:27.698292  258995 cni.go:84] Creating CNI manager for ""
	I1107 23:26:27.698325  258995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:26:27.698338  258995 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:26:27.698352  258995 start_flags.go:323] config:
	{Name:addons-257591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-257591 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:26:27.700396  258995 out.go:177] * Starting control plane node addons-257591 in cluster addons-257591
	I1107 23:26:27.702315  258995 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1107 23:26:27.704009  258995 out.go:177] * Pulling base image ...
	I1107 23:26:27.705537  258995 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1107 23:26:27.705590  258995 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1107 23:26:27.705602  258995 cache.go:56] Caching tarball of preloaded images
	I1107 23:26:27.705689  258995 preload.go:174] Found /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1107 23:26:27.705704  258995 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1107 23:26:27.706099  258995 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/config.json ...
	I1107 23:26:27.706128  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/config.json: {Name:mka3e9a85b7b2775bb3af60a4bafae56633a32d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:27.706295  258995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:26:27.726120  258995 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:26:27.726247  258995 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:26:27.726271  258995 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 23:26:27.726281  258995 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 23:26:27.726289  258995 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:26:27.726298  258995 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1107 23:26:44.278305  258995 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1107 23:26:44.278349  258995 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:26:44.278428  258995 start.go:365] acquiring machines lock for addons-257591: {Name:mkaea752b5ccbbe7323dbc3afdc91e359378c6fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:26:44.278565  258995 start.go:369] acquired machines lock for "addons-257591" in 110.827µs
	I1107 23:26:44.278593  258995 start.go:93] Provisioning new machine with config: &{Name:addons-257591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-257591 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 23:26:44.278667  258995 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:26:44.280658  258995 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1107 23:26:44.280901  258995 start.go:159] libmachine.API.Create for "addons-257591" (driver="docker")
	I1107 23:26:44.280933  258995 client.go:168] LocalClient.Create starting
	I1107 23:26:44.281039  258995 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem
	I1107 23:26:45.006794  258995 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem
	I1107 23:26:45.734046  258995 cli_runner.go:164] Run: docker network inspect addons-257591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:26:45.751761  258995 cli_runner.go:211] docker network inspect addons-257591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:26:45.751843  258995 network_create.go:281] running [docker network inspect addons-257591] to gather additional debugging logs...
	I1107 23:26:45.751864  258995 cli_runner.go:164] Run: docker network inspect addons-257591
	W1107 23:26:45.769461  258995 cli_runner.go:211] docker network inspect addons-257591 returned with exit code 1
	I1107 23:26:45.769491  258995 network_create.go:284] error running [docker network inspect addons-257591]: docker network inspect addons-257591: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-257591 not found
	I1107 23:26:45.769504  258995 network_create.go:286] output of [docker network inspect addons-257591]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-257591 not found
	
	** /stderr **
	I1107 23:26:45.769627  258995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:26:45.787561  258995 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024fdf60}
	I1107 23:26:45.787604  258995 network_create.go:124] attempt to create docker network addons-257591 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 23:26:45.787664  258995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-257591 addons-257591
	I1107 23:26:45.857214  258995 network_create.go:108] docker network addons-257591 192.168.49.0/24 created
	I1107 23:26:45.857301  258995 kic.go:121] calculated static IP "192.168.49.2" for the "addons-257591" container
	I1107 23:26:45.857387  258995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:26:45.873811  258995 cli_runner.go:164] Run: docker volume create addons-257591 --label name.minikube.sigs.k8s.io=addons-257591 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:26:45.895129  258995 oci.go:103] Successfully created a docker volume addons-257591
	I1107 23:26:45.895218  258995 cli_runner.go:164] Run: docker run --rm --name addons-257591-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-257591 --entrypoint /usr/bin/test -v addons-257591:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:26:47.112529  258995 cli_runner.go:217] Completed: docker run --rm --name addons-257591-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-257591 --entrypoint /usr/bin/test -v addons-257591:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.217268959s)
	I1107 23:26:47.112567  258995 oci.go:107] Successfully prepared a docker volume addons-257591
	I1107 23:26:47.112607  258995 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1107 23:26:47.112631  258995 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:26:47.112708  258995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-257591:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:26:51.389932  258995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-257591:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.277182566s)
	I1107 23:26:51.389964  258995 kic.go:203] duration metric: took 4.277330 seconds to extract preloaded images to volume
	W1107 23:26:51.390129  258995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:26:51.390237  258995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:26:51.455968  258995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-257591 --name addons-257591 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-257591 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-257591 --network addons-257591 --ip 192.168.49.2 --volume addons-257591:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:26:51.825849  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Running}}
	I1107 23:26:51.846015  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:26:51.867945  258995 cli_runner.go:164] Run: docker exec addons-257591 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:26:51.957328  258995 oci.go:144] the created container "addons-257591" has a running status.
	I1107 23:26:51.957358  258995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa...
	I1107 23:26:52.429428  258995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:26:52.473137  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:26:52.515303  258995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:26:52.515336  258995 kic_runner.go:114] Args: [docker exec --privileged addons-257591 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:26:52.608325  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:26:52.644027  258995 machine.go:88] provisioning docker machine ...
	I1107 23:26:52.644060  258995 ubuntu.go:169] provisioning hostname "addons-257591"
	I1107 23:26:52.644135  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:52.684561  258995 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:52.686650  258995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1107 23:26:52.686672  258995 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-257591 && echo "addons-257591" | sudo tee /etc/hostname
	I1107 23:26:52.879748  258995 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-257591
	
	I1107 23:26:52.879913  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:52.905272  258995 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:52.905684  258995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1107 23:26:52.905709  258995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-257591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-257591/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-257591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:26:53.043272  258995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:26:53.043303  258995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-253150/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-253150/.minikube}
	I1107 23:26:53.043327  258995 ubuntu.go:177] setting up certificates
	I1107 23:26:53.043337  258995 provision.go:83] configureAuth start
	I1107 23:26:53.043404  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-257591
	I1107 23:26:53.063962  258995 provision.go:138] copyHostCerts
	I1107 23:26:53.064085  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-253150/.minikube/ca.pem (1078 bytes)
	I1107 23:26:53.064284  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-253150/.minikube/cert.pem (1123 bytes)
	I1107 23:26:53.064410  258995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-253150/.minikube/key.pem (1675 bytes)
	I1107 23:26:53.064531  258995 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca-key.pem org=jenkins.addons-257591 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-257591]
	I1107 23:26:53.454319  258995 provision.go:172] copyRemoteCerts
	I1107 23:26:53.454410  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:26:53.454472  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:53.471876  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:26:53.564223  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:26:53.593568  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 23:26:53.623155  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:26:53.652632  258995 provision.go:86] duration metric: configureAuth took 609.26672ms
	I1107 23:26:53.652666  258995 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:26:53.652861  258995 config.go:182] Loaded profile config "addons-257591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:26:53.652869  258995 machine.go:91] provisioned docker machine in 1.00882134s
	I1107 23:26:53.652875  258995 client.go:171] LocalClient.Create took 9.371936571s
	I1107 23:26:53.652898  258995 start.go:167] duration metric: libmachine.API.Create for "addons-257591" took 9.371998019s
	I1107 23:26:53.652906  258995 start.go:300] post-start starting for "addons-257591" (driver="docker")
	I1107 23:26:53.652914  258995 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:26:53.652965  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:26:53.653006  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:53.671306  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:26:53.764251  258995 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:26:53.768295  258995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:26:53.768335  258995 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:26:53.768346  258995 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:26:53.768353  258995 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:26:53.768363  258995 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-253150/.minikube/addons for local assets ...
	I1107 23:26:53.768429  258995 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-253150/.minikube/files for local assets ...
	I1107 23:26:53.768457  258995 start.go:303] post-start completed in 115.545984ms
	I1107 23:26:53.768760  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-257591
	I1107 23:26:53.786584  258995 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/config.json ...
	I1107 23:26:53.786866  258995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:26:53.786917  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:53.807583  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:26:53.895179  258995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:26:53.901252  258995 start.go:128] duration metric: createHost completed in 9.622569315s
	I1107 23:26:53.901281  258995 start.go:83] releasing machines lock for "addons-257591", held for 9.622705036s
	I1107 23:26:53.901352  258995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-257591
	I1107 23:26:53.923360  258995 ssh_runner.go:195] Run: cat /version.json
	I1107 23:26:53.923420  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:53.923664  258995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:26:53.923726  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:26:53.942685  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:26:53.954105  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:26:54.038518  258995 ssh_runner.go:195] Run: systemctl --version
	I1107 23:26:54.247358  258995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:26:54.252990  258995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1107 23:26:54.284014  258995 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
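The loopback patch above inserts a missing `"name"` key and pins `cniVersion` via `sed`. The same two edits, replayed against a throwaway file instead of `/etc/cni/net.d` (GNU `sed -i` assumed; the sample JSON is illustrative):

```shell
#!/bin/sh
# Re-creation of the loopback CNI patch from the log on a scratch file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
# Insert a "name" key before the "type": "loopback" line if it is missing,
# then pin cniVersion to 1.0.0, exactly as the logged one-liner does.
grep -q '"name"' "$conf" || sudo=; sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$conf"
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$conf"
cat "$conf"
rm -f "$conf"
```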
	I1107 23:26:54.284157  258995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:26:54.317516  258995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:26:54.317541  258995 start.go:472] detecting cgroup driver to use...
	I1107 23:26:54.317574  258995 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:26:54.317625  258995 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 23:26:54.332271  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 23:26:54.346022  258995 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:26:54.346118  258995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:26:54.362237  258995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:26:54.378040  258995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:26:54.473319  258995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:26:54.566881  258995 docker.go:219] disabling docker service ...
	I1107 23:26:54.566945  258995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:26:54.589085  258995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:26:54.603750  258995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:26:54.702880  258995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:26:54.803965  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:26:54.817715  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:26:54.837323  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1107 23:26:54.849718  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1107 23:26:54.861750  258995 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1107 23:26:54.861859  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1107 23:26:54.874424  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:26:54.886707  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1107 23:26:54.899486  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:26:54.911287  258995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:26:54.922544  258995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
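The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place. A sketch of two of them (`sandbox_image`, `SystemdCgroup`) against a scratch copy, assuming GNU `sed -i -r`; the sample TOML fragment is illustrative:

```shell
#!/bin/sh
# Replay of the containerd config edits from the log on a scratch file.
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# \1 re-emits the captured indentation so nesting is preserved.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
cat "$toml"
rm -f "$toml"
```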
	I1107 23:26:54.934236  258995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:26:54.944471  258995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:26:54.955200  258995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:26:55.055277  258995 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 23:26:55.210660  258995 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1107 23:26:55.210797  258995 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1107 23:26:55.215954  258995 start.go:540] Will wait 60s for crictl version
	I1107 23:26:55.216071  258995 ssh_runner.go:195] Run: which crictl
	I1107 23:26:55.220679  258995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:26:55.264932  258995 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1107 23:26:55.265060  258995 ssh_runner.go:195] Run: containerd --version
	I1107 23:26:55.293103  258995 ssh_runner.go:195] Run: containerd --version
	I1107 23:26:55.323253  258995 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1107 23:26:55.325057  258995 cli_runner.go:164] Run: docker network inspect addons-257591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:26:55.343741  258995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1107 23:26:55.348468  258995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
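The `/etc/hosts` refresh above uses a grep-filter-then-append idiom. The same pattern replayed on a temp file (the log runs it under bash with a `$'\t'` tab; a printf-built tab is the portable equivalent):

```shell
#!/bin/sh
# Drop any stale "<tab>host.minikube.internal" line, append the fresh
# mapping, then copy the result back over the original.
tab=$(printf '\t')
hosts=$(mktemp); tmp=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
cat "$hosts"
rm -f "$hosts" "$tmp"
```

The filter-then-append keeps the update idempotent: rerunning it never duplicates the entry.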
	I1107 23:26:55.362564  258995 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1107 23:26:55.362638  258995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:26:55.403872  258995 containerd.go:604] all images are preloaded for containerd runtime.
	I1107 23:26:55.403896  258995 containerd.go:518] Images already preloaded, skipping extraction
	I1107 23:26:55.403955  258995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:26:55.445597  258995 containerd.go:604] all images are preloaded for containerd runtime.
	I1107 23:26:55.445624  258995 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:26:55.445713  258995 ssh_runner.go:195] Run: sudo crictl info
	I1107 23:26:55.488240  258995 cni.go:84] Creating CNI manager for ""
	I1107 23:26:55.488266  258995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:26:55.488295  258995 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:26:55.488314  258995 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-257591 NodeName:addons-257591 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:26:55.488461  258995 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-257591"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
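A quick sanity probe over a generated config like the one above: pull the pod and service CIDRs back out with `awk` (field names match the `networking:` stanza; using a scratch copy of just that stanza):

```shell
#!/bin/sh
# Extract podSubnet/serviceSubnet from the kubeadm networking stanza.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
# Split on ": "; strip the quotes kubeadm puts around podSubnet.
pod=$(awk -F': ' '/podSubnet/{gsub(/"/,"",$2); print $2}' "$cfg")
svc=$(awk -F': ' '/serviceSubnet/{print $2}' "$cfg")
echo "podSubnet=$pod serviceSubnet=$svc"
# -> podSubnet=10.244.0.0/16 serviceSubnet=10.96.0.0/12
rm -f "$cfg"
```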
	I1107 23:26:55.488530  258995 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-257591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-257591 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:26:55.488599  258995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:26:55.499536  258995 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:26:55.499662  258995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:26:55.510450  258995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1107 23:26:55.531809  258995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:26:55.553336  258995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1107 23:26:55.574601  258995 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:26:55.579199  258995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:26:55.593012  258995 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591 for IP: 192.168.49.2
	I1107 23:26:55.593097  258995 certs.go:190] acquiring lock for shared ca certs: {Name:mk29255a37c97dfa8464e8fe04cc7357102af55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:55.593770  258995 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key
	I1107 23:26:55.922820  258995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt ...
	I1107 23:26:55.922852  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt: {Name:mk4e9cb743789e55194f113fe04c3c885827f98e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:55.923050  258995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key ...
	I1107 23:26:55.923068  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key: {Name:mkeacbc6b947ca23d9ebff583eebce42686d5d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:55.923673  258995 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key
	I1107 23:26:56.084855  258995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.crt ...
	I1107 23:26:56.084885  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.crt: {Name:mkfb8efebd9d8df084ef7e8a095abe52e84ea516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:56.085067  258995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key ...
	I1107 23:26:56.085080  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key: {Name:mke02911ce524ee820f13223413c5f2867927d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:56.085625  258995 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.key
	I1107 23:26:56.085649  258995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt with IP's: []
	I1107 23:26:56.522919  258995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt ...
	I1107 23:26:56.522953  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: {Name:mk58c4a25ec4c24045dc42fafe51d000ba60ad57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:56.523546  258995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.key ...
	I1107 23:26:56.523564  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.key: {Name:mk2c33ec45f61d77737cefed8f961c6b6bbf8d42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:56.523658  258995 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.key.dd3b5fb2
	I1107 23:26:56.523680  258995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:26:57.491964  258995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.crt.dd3b5fb2 ...
	I1107 23:26:57.492638  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.crt.dd3b5fb2: {Name:mkc8fc4cabd0cf9cf284af11b637098fbd517a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:57.492817  258995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.key.dd3b5fb2 ...
	I1107 23:26:57.492831  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.key.dd3b5fb2: {Name:mk9511d4c26ea84f662a2579499844d637f2898e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:57.493327  258995 certs.go:337] copying /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.crt
	I1107 23:26:57.493427  258995 certs.go:341] copying /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.key
	I1107 23:26:57.493482  258995 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.key
	I1107 23:26:57.493507  258995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.crt with IP's: []
	I1107 23:26:57.634307  258995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.crt ...
	I1107 23:26:57.634340  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.crt: {Name:mk44db8a9be86f7f9e04230b3948f62dd6d68a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:57.634873  258995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.key ...
	I1107 23:26:57.634892  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.key: {Name:mk3ea62a8baf507abf974aa3e5e3836732e87c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:57.635090  258995 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:26:57.635172  258995 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:26:57.635206  258995 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:26:57.635242  258995 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem (1675 bytes)
	I1107 23:26:57.635940  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:26:57.666468  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:26:57.696479  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:26:57.725707  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:26:57.755577  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:26:57.785526  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:26:57.815786  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:26:57.846472  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:26:57.875993  258995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:26:57.905354  258995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:26:57.927159  258995 ssh_runner.go:195] Run: openssl version
	I1107 23:26:57.934455  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:26:57.946683  258995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:57.951826  258995 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:57.951936  258995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:57.961053  258995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
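The `test -L || ln -fs` above only creates the OpenSSL hash-named link when one is not already present. The same guard on a scratch directory (`b5213941.0` is the hash name the log assigns to `minikubeCA.pem`; the directory here is a stand-in for `/etc/ssl/certs`):

```shell
#!/bin/sh
# Create the hash-named symlink only if it does not already exist.
dir=$(mktemp -d)
touch "$dir/minikubeCA.pem"
test -L "$dir/b5213941.0" || ln -fs "$dir/minikubeCA.pem" "$dir/b5213941.0"
ls -l "$dir/b5213941.0"
rm -rf "$dir"
```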
	I1107 23:26:57.973750  258995 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:26:57.978374  258995 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:26:57.978425  258995 kubeadm.go:404] StartCluster: {Name:addons-257591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-257591 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:26:57.978498  258995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1107 23:26:57.978564  258995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:26:58.027430  258995 cri.go:89] found id: ""
	I1107 23:26:58.027546  258995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:26:58.040853  258995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:26:58.054393  258995 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:26:58.054470  258995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:26:58.066456  258995 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:26:58.066503  258995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:26:58.120783  258995 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:26:58.120904  258995 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:26:58.171795  258995 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:26:58.171909  258995 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:26:58.171969  258995 kubeadm.go:322] OS: Linux
	I1107 23:26:58.172035  258995 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:26:58.172108  258995 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:26:58.172172  258995 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:26:58.172261  258995 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:26:58.172334  258995 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:26:58.172409  258995 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:26:58.172492  258995 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1107 23:26:58.172567  258995 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1107 23:26:58.172639  258995 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1107 23:26:58.256573  258995 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:26:58.256740  258995 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:26:58.256871  258995 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:26:58.513641  258995 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:26:58.516122  258995 out.go:204]   - Generating certificates and keys ...
	I1107 23:26:58.516303  258995 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:26:58.516417  258995 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:26:59.155251  258995 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:26:59.396298  258995 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:26:59.554011  258995 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:26:59.875772  258995 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:27:00.010406  258995 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:27:00.010601  258995 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-257591 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:27:00.391675  258995 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:27:00.392043  258995 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-257591 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:27:00.967069  258995 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:27:01.802747  258995 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:27:03.010395  258995 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:27:03.010926  258995 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:27:03.642210  258995 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:27:04.113629  258995 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:27:04.641873  258995 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:27:05.834499  258995 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:27:05.835205  258995 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:27:05.837903  258995 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:27:05.840101  258995 out.go:204]   - Booting up control plane ...
	I1107 23:27:05.840233  258995 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:27:05.840308  258995 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:27:05.840981  258995 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:27:05.855962  258995 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:27:05.858831  258995 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:27:05.859329  258995 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:27:05.960215  258995 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:27:14.964518  258995 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002615 seconds
	I1107 23:27:14.964638  258995 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:27:14.978051  258995 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:27:15.503154  258995 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:27:15.503382  258995 kubeadm.go:322] [mark-control-plane] Marking the node addons-257591 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:27:16.019095  258995 kubeadm.go:322] [bootstrap-token] Using token: o05c9u.3aboh7mmbhmvlcdn
	I1107 23:27:16.021044  258995 out.go:204]   - Configuring RBAC rules ...
	I1107 23:27:16.021166  258995 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:27:16.027085  258995 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:27:16.037694  258995 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:27:16.042190  258995 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:27:16.046159  258995 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:27:16.050174  258995 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:27:16.066895  258995 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:27:16.310890  258995 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:27:16.441698  258995 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:27:16.442978  258995 kubeadm.go:322] 
	I1107 23:27:16.443045  258995 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:27:16.443053  258995 kubeadm.go:322] 
	I1107 23:27:16.443142  258995 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:27:16.443176  258995 kubeadm.go:322] 
	I1107 23:27:16.443202  258995 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:27:16.443261  258995 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:27:16.443313  258995 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:27:16.443321  258995 kubeadm.go:322] 
	I1107 23:27:16.443372  258995 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:27:16.443380  258995 kubeadm.go:322] 
	I1107 23:27:16.443429  258995 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:27:16.443437  258995 kubeadm.go:322] 
	I1107 23:27:16.443486  258995 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:27:16.443560  258995 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:27:16.443630  258995 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:27:16.443641  258995 kubeadm.go:322] 
	I1107 23:27:16.443719  258995 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:27:16.443795  258995 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:27:16.443805  258995 kubeadm.go:322] 
	I1107 23:27:16.443888  258995 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o05c9u.3aboh7mmbhmvlcdn \
	I1107 23:27:16.443988  258995 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e24392fb732769393a2f48b7656045863010b5e31bad5114f11c508fcda3c9 \
	I1107 23:27:16.444011  258995 kubeadm.go:322] 	--control-plane 
	I1107 23:27:16.444019  258995 kubeadm.go:322] 
	I1107 23:27:16.444098  258995 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:27:16.444106  258995 kubeadm.go:322] 
	I1107 23:27:16.444183  258995 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o05c9u.3aboh7mmbhmvlcdn \
	I1107 23:27:16.444504  258995 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e24392fb732769393a2f48b7656045863010b5e31bad5114f11c508fcda3c9 
	I1107 23:27:16.448549  258995 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:27:16.448684  258995 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:27:16.448726  258995 cni.go:84] Creating CNI manager for ""
	I1107 23:27:16.448739  258995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:27:16.452231  258995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:27:16.453950  258995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:27:16.459931  258995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:27:16.459954  258995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:27:16.502479  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:27:17.419868  258995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:27:17.420028  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:17.420120  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=addons-257591 minikube.k8s.io/updated_at=2023_11_07T23_27_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:17.622265  258995 ops.go:34] apiserver oom_adj: -16
	I1107 23:27:17.622359  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:17.724789  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:18.319381  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:18.819384  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:19.319460  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:19.819781  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:20.319138  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:20.819723  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:21.319577  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:21.819126  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:22.319727  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:22.819113  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:23.320015  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:23.819683  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:24.319823  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:24.820026  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:25.319686  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:25.819618  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:26.319153  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:26.819230  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:27.319873  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:27.819708  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:28.319870  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:28.819518  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:29.319865  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:29.819307  258995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:29.979917  258995 kubeadm.go:1081] duration metric: took 12.559951407s to wait for elevateKubeSystemPrivileges.
	I1107 23:27:29.979946  258995 kubeadm.go:406] StartCluster complete in 32.001523072s
	I1107 23:27:29.979962  258995 settings.go:142] acquiring lock: {Name:mk0c44fb0eb9743c4797be21f306bacb6fb52d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:27:29.980566  258995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:27:29.981007  258995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/kubeconfig: {Name:mk8224b7929d8ccd4d6d2717b272fe897cc064e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:27:29.981215  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:27:29.981612  258995 config.go:182] Loaded profile config "addons-257591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:27:29.981723  258995 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1107 23:27:29.981839  258995 addons.go:69] Setting volumesnapshots=true in profile "addons-257591"
	I1107 23:27:29.981853  258995 addons.go:231] Setting addon volumesnapshots=true in "addons-257591"
	I1107 23:27:29.981925  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:29.982544  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:29.983569  258995 addons.go:69] Setting inspektor-gadget=true in profile "addons-257591"
	I1107 23:27:29.983589  258995 addons.go:231] Setting addon inspektor-gadget=true in "addons-257591"
	I1107 23:27:29.983625  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:29.984094  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:29.984447  258995 addons.go:69] Setting metrics-server=true in profile "addons-257591"
	I1107 23:27:29.984475  258995 addons.go:231] Setting addon metrics-server=true in "addons-257591"
	I1107 23:27:29.984530  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:29.985055  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:29.985711  258995 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-257591"
	I1107 23:27:29.985732  258995 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-257591"
	I1107 23:27:29.985782  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:29.986187  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:29.993739  258995 addons.go:69] Setting cloud-spanner=true in profile "addons-257591"
	I1107 23:27:29.995741  258995 addons.go:231] Setting addon cloud-spanner=true in "addons-257591"
	I1107 23:27:30.000042  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:29.995483  258995 addons.go:69] Setting registry=true in profile "addons-257591"
	I1107 23:27:29.995499  258995 addons.go:69] Setting storage-provisioner=true in profile "addons-257591"
	I1107 23:27:29.995510  258995 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-257591"
	I1107 23:27:29.995696  258995 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-257591"
	I1107 23:27:29.995706  258995 addons.go:69] Setting default-storageclass=true in profile "addons-257591"
	I1107 23:27:29.995710  258995 addons.go:69] Setting gcp-auth=true in profile "addons-257591"
	I1107 23:27:29.995714  258995 addons.go:69] Setting ingress=true in profile "addons-257591"
	I1107 23:27:29.995718  258995 addons.go:69] Setting ingress-dns=true in profile "addons-257591"
	I1107 23:27:30.001043  258995 addons.go:231] Setting addon ingress-dns=true in "addons-257591"
	I1107 23:27:30.001126  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.001636  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.006682  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.025326  258995 addons.go:231] Setting addon registry=true in "addons-257591"
	I1107 23:27:30.025461  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.026030  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.046701  258995 addons.go:231] Setting addon storage-provisioner=true in "addons-257591"
	I1107 23:27:30.046857  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.047636  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.075639  258995 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-257591"
	I1107 23:27:30.076023  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.118640  258995 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-257591"
	I1107 23:27:30.118726  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.119223  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.145591  258995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-257591"
	I1107 23:27:30.146015  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.172629  258995 mustload.go:65] Loading cluster: addons-257591
	I1107 23:27:30.172870  258995 config.go:182] Loaded profile config "addons-257591": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:27:30.173140  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.214161  258995 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1107 23:27:30.213212  258995 addons.go:231] Setting addon ingress=true in "addons-257591"
	I1107 23:27:30.223966  258995 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1107 23:27:30.224127  258995 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1107 23:27:30.224134  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1107 23:27:30.224364  258995 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:27:30.224422  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.232621  258995 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1107 23:27:30.234727  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:27:30.234741  258995 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1107 23:27:30.235328  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.235347  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1107 23:27:30.234752  258995 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1107 23:27:30.245405  258995 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:27:30.245510  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1107 23:27:30.245631  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.239846  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.240009  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.240077  258995 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1107 23:27:30.281729  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1107 23:27:30.281869  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.301499  258995 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-257591" context rescaled to 1 replicas
	I1107 23:27:30.301539  258995 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 23:27:30.240440  258995 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:27:30.337281  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1107 23:27:30.337383  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.344866  258995 out.go:177] * Verifying Kubernetes components...
	I1107 23:27:30.408899  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:27:30.427788  258995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:27:30.424981  258995 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-257591"
	I1107 23:27:30.430332  258995 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1107 23:27:30.436129  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1107 23:27:30.436215  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.439379  258995 out.go:177]   - Using image docker.io/registry:2.8.3
	I1107 23:27:30.441627  258995 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1107 23:27:30.439352  258995 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:27:30.439615  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.444510  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.460255  258995 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1107 23:27:30.460279  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1107 23:27:30.460344  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.474199  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1107 23:27:30.478703  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1107 23:27:30.474452  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:27:30.475425  258995 addons.go:231] Setting addon default-storageclass=true in "addons-257591"
	I1107 23:27:30.475652  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.486599  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1107 23:27:30.483999  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.484037  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:30.491568  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:30.516693  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1107 23:27:30.518974  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1107 23:27:30.525217  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1107 23:27:30.532148  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1107 23:27:30.547308  258995 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1107 23:27:30.549210  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1107 23:27:30.549259  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1107 23:27:30.549412  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.585365  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.586361  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.587777  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.592245  258995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1107 23:27:30.594100  258995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:27:30.596340  258995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:27:30.599902  258995 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:27:30.599961  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1107 23:27:30.600060  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.615772  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.616695  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.668142  258995 node_ready.go:35] waiting up to 6m0s for node "addons-257591" to be "Ready" ...
	I1107 23:27:30.669079  258995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:27:30.697861  258995 node_ready.go:49] node "addons-257591" has status "Ready":"True"
	I1107 23:27:30.697893  258995 node_ready.go:38] duration metric: took 29.72205ms waiting for node "addons-257591" to be "Ready" ...
	I1107 23:27:30.697903  258995 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:27:30.704802  258995 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1107 23:27:30.707105  258995 out.go:177]   - Using image docker.io/busybox:stable
	I1107 23:27:30.709268  258995 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:27:30.709287  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1107 23:27:30.709352  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.715864  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.722873  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.751153  258995 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:27:30.751172  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:27:30.751246  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:30.754168  258995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:30.768341  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.782705  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.784640  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.832123  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:30.835623  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	W1107 23:27:30.836598  258995 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1107 23:27:30.836628  258995 retry.go:31] will retry after 261.932696ms: ssh: handshake failed: EOF
	I1107 23:27:31.277900  258995 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:27:31.277928  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1107 23:27:31.343284  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:27:31.393861  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1107 23:27:31.442462  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:27:31.503460  258995 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1107 23:27:31.503488  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1107 23:27:31.510323  258995 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1107 23:27:31.510352  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1107 23:27:31.534637  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:27:31.641655  258995 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1107 23:27:31.641688  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1107 23:27:31.664582  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:27:31.673613  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1107 23:27:31.673641  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1107 23:27:31.674809  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:27:31.733391  258995 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:27:31.733413  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:27:31.757426  258995 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1107 23:27:31.757454  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1107 23:27:31.802558  258995 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:27:31.802581  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1107 23:27:31.859894  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:27:31.961170  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:27:31.965024  258995 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1107 23:27:31.965056  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1107 23:27:32.047072  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1107 23:27:32.047099  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1107 23:27:32.096611  258995 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:27:32.096638  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:27:32.137505  258995 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1107 23:27:32.137533  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1107 23:27:32.271171  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1107 23:27:32.271199  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1107 23:27:32.347658  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:27:32.368025  258995 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1107 23:27:32.368053  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1107 23:27:32.379708  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1107 23:27:32.379737  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1107 23:27:32.410007  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1107 23:27:32.410040  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1107 23:27:32.616954  258995 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:27:32.616981  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1107 23:27:32.623757  258995 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1107 23:27:32.623781  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1107 23:27:32.687032  258995 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1107 23:27:32.687116  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1107 23:27:32.851447  258995 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.182337242s)
	I1107 23:27:32.851481  258995 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1107 23:27:32.864001  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:32.881394  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.538067936s)
	I1107 23:27:33.018895  258995 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1107 23:27:33.018985  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1107 23:27:33.034134  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:27:33.115238  258995 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1107 23:27:33.115338  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1107 23:27:33.162437  258995 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1107 23:27:33.162462  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1107 23:27:33.212576  258995 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1107 23:27:33.212604  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1107 23:27:33.277976  258995 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:27:33.278001  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1107 23:27:33.282173  258995 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1107 23:27:33.282195  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1107 23:27:33.423806  258995 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1107 23:27:33.423830  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1107 23:27:33.497581  258995 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:27:33.497606  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1107 23:27:33.523089  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:27:33.541779  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:27:34.379884  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.985982814s)
	I1107 23:27:34.379985  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.937457122s)
	I1107 23:27:34.940587  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.405912984s)
	I1107 23:27:35.197072  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.532451813s)
	I1107 23:27:35.318636  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:37.306087  258995 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1107 23:27:37.306200  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:37.335989  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:37.366835  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:37.512387  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.837540965s)
	I1107 23:27:37.512472  258995 addons.go:467] Verifying addon ingress=true in "addons-257591"
	I1107 23:27:37.514508  258995 out.go:177] * Verifying ingress addon...
	I1107 23:27:37.512652  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.65272287s)
	I1107 23:27:37.512726  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.551534086s)
	I1107 23:27:37.512784  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.165099542s)
	I1107 23:27:37.512860  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.478642092s)
	W1107 23:27:37.514983  258995 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:27:37.514692  258995 addons.go:467] Verifying addon registry=true in "addons-257591"
	I1107 23:27:37.517014  258995 out.go:177] * Verifying registry addon...
	I1107 23:27:37.515202  258995 retry.go:31] will retry after 174.710281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:27:37.514906  258995 addons.go:467] Verifying addon metrics-server=true in "addons-257591"
	I1107 23:27:37.520310  258995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1107 23:27:37.522263  258995 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1107 23:27:37.534699  258995 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1107 23:27:37.534722  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:37.540392  258995 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1107 23:27:37.540421  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:37.547228  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:37.559281  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:37.675000  258995 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1107 23:27:37.694735  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:27:37.792931  258995 addons.go:231] Setting addon gcp-auth=true in "addons-257591"
	I1107 23:27:37.793039  258995 host.go:66] Checking if "addons-257591" exists ...
	I1107 23:27:37.793651  258995 cli_runner.go:164] Run: docker container inspect addons-257591 --format={{.State.Status}}
	I1107 23:27:37.831927  258995 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1107 23:27:37.831995  258995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-257591
	I1107 23:27:37.874790  258995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/addons-257591/id_rsa Username:docker}
	I1107 23:27:38.052662  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:38.069023  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:38.555371  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:38.571189  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:39.073722  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:39.091244  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.568093167s)
	I1107 23:27:39.091338  258995 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-257591"
	I1107 23:27:39.093700  258995 out.go:177] * Verifying csi-hostpath-driver addon...
	I1107 23:27:39.091633  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.549815752s)
	I1107 23:27:39.096713  258995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1107 23:27:39.114197  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:39.138285  258995 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1107 23:27:39.138311  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:39.161450  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:39.551398  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:39.565000  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:39.595409  258995 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.763441789s)
	I1107 23:27:39.599179  258995 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1107 23:27:39.595583  258995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.900675585s)
	I1107 23:27:39.601288  258995 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:27:39.603196  258995 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1107 23:27:39.603222  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1107 23:27:39.633999  258995 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1107 23:27:39.634027  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1107 23:27:39.662389  258995 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:27:39.662414  258995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1107 23:27:39.667604  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:39.689949  258995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:27:39.824306  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:40.052935  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:40.065748  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:40.168245  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:40.575832  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:40.579347  258995 addons.go:467] Verifying addon gcp-auth=true in "addons-257591"
	I1107 23:27:40.581415  258995 out.go:177] * Verifying gcp-auth addon...
	I1107 23:27:40.584183  258995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1107 23:27:40.593590  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:40.604619  258995 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1107 23:27:40.604643  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:40.611114  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:40.668186  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:41.052910  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:41.066525  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:41.115317  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:41.168664  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:41.551908  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:41.564385  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:41.615395  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:41.668827  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:42.052913  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:42.065354  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:42.116631  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:42.171081  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:42.322169  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:42.553006  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:42.565534  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:42.615488  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:42.668519  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:43.054662  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:43.066736  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:43.116256  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:43.168219  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:43.552196  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:43.564869  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:43.614765  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:43.668250  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:44.052453  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:44.067263  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:44.116080  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:44.169563  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:44.553097  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:44.564787  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:44.615428  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:44.668731  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:44.821564  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:45.113633  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:45.117006  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:45.125918  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:45.168957  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:45.552623  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:45.564739  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:45.615930  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:45.668218  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:46.052777  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:46.065556  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:46.116058  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:46.171101  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:46.552367  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:46.564416  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:46.615469  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:46.667653  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:47.052250  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:47.064685  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:47.115377  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:47.168068  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:47.319970  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:47.552133  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:47.565077  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:47.615564  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:47.668032  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:48.052226  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:48.064973  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:48.115027  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:48.167786  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:48.552786  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:48.564711  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:48.614758  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:48.671277  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:49.052351  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:49.063950  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:49.115276  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:49.167495  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:49.552016  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:49.564427  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:49.615149  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:49.668526  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:49.819052  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:50.052966  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:50.065457  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:50.116074  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:50.167974  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:50.552591  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:50.564232  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:50.615802  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:50.667475  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:51.052654  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:51.064947  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:51.116004  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:51.167546  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:51.551644  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:51.564257  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:51.615580  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:51.672104  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:52.052341  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:52.063884  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:52.114773  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:52.168201  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:52.319021  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:52.552237  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:52.563702  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:52.615291  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:52.668066  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:53.051988  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:53.064782  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:53.116916  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:53.168338  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:53.551944  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:53.565572  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:53.615269  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:53.668234  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:54.052938  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:54.065619  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:54.115854  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:54.167751  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:54.552405  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:54.563949  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:54.615648  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:54.667605  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:54.819303  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:55.053066  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:55.065280  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:55.115281  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:55.168308  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:55.552202  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:55.565289  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:55.615001  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:55.668862  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:56.051700  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:56.064663  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:56.115360  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:56.167863  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:56.552011  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:56.564588  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:56.615153  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:56.667845  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:57.052964  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:57.065313  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:57.115578  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:57.167681  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:57.318828  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:27:57.552070  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:57.564796  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:57.614936  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:57.667465  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:58.052088  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:58.065152  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:58.115331  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:58.167392  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:58.555288  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:58.563889  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:58.614842  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:58.667367  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:59.052247  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:59.063753  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:59.115599  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:59.166901  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:59.552224  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:27:59.563881  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:27:59.615666  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:27:59.667471  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:27:59.817925  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:28:00.065600  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:00.077215  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:00.117958  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:00.174022  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:00.551902  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:00.565116  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:00.615081  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:00.667194  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:01.051904  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:01.064449  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:01.115754  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:01.167844  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:01.552350  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:01.564373  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:01.615178  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:01.669170  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:01.820506  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:28:02.051645  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:02.064585  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:02.116194  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:02.171189  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:02.554596  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:02.566451  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:02.615627  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:02.670653  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:03.052704  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:03.065466  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:03.115299  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:03.168292  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:03.552541  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:03.564830  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:03.617577  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:03.667539  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:04.052355  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:04.063877  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:04.114921  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:04.167129  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:04.319183  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:28:04.553266  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:04.568110  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:04.614848  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:04.667822  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:05.052264  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:05.064707  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:05.115572  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:05.168426  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:05.552809  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:05.565252  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:05.615198  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:05.668221  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:06.052529  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:06.064740  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:06.115104  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:06.167604  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:06.552113  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:06.564574  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:06.615322  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:06.667328  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:06.819642  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:28:07.053050  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:07.065011  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:07.115913  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:07.168748  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:07.551760  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:07.564201  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:07.615467  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:07.666939  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:08.051822  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:08.064433  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:08.115278  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:08.168503  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:08.552282  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:08.564455  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:08.615366  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:08.667691  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:09.052441  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:09.064334  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:09.115530  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:09.167581  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:09.318755  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:28:09.552491  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:09.564562  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:09.616036  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:09.667114  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:10.052505  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:10.065118  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:10.115609  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:10.168152  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:10.552977  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:10.565282  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:10.615241  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:10.668566  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:11.052568  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:11.065346  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:11.115613  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:11.169324  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:11.318941  258995 pod_ready.go:102] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"False"
	I1107 23:28:11.552124  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:11.565569  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:11.615444  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:11.669658  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:12.052630  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:12.066210  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:12.116327  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:12.168492  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:12.552090  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:12.565458  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:12.615535  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:12.669163  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:13.053017  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:13.065568  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:13.115694  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:13.168504  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:13.319526  258995 pod_ready.go:92] pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:13.319601  258995 pod_ready.go:81] duration metric: took 42.565348526s waiting for pod "coredns-5dd5756b68-mfz4n" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.319628  258995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbnlj" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.322251  258995 pod_ready.go:97] error getting pod "coredns-5dd5756b68-pbnlj" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbnlj" not found
	I1107 23:28:13.322333  258995 pod_ready.go:81] duration metric: took 2.667293ms waiting for pod "coredns-5dd5756b68-pbnlj" in "kube-system" namespace to be "Ready" ...
	E1107 23:28:13.322359  258995 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-pbnlj" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbnlj" not found
	I1107 23:28:13.322394  258995 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.328253  258995 pod_ready.go:92] pod "etcd-addons-257591" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:13.328282  258995 pod_ready.go:81] duration metric: took 5.863151ms waiting for pod "etcd-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.328300  258995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.334849  258995 pod_ready.go:92] pod "kube-apiserver-addons-257591" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:13.334873  258995 pod_ready.go:81] duration metric: took 6.533607ms waiting for pod "kube-apiserver-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.334885  258995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.341532  258995 pod_ready.go:92] pod "kube-controller-manager-addons-257591" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:13.341558  258995 pod_ready.go:81] duration metric: took 6.664586ms waiting for pod "kube-controller-manager-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.341571  258995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4dmv5" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.516399  258995 pod_ready.go:92] pod "kube-proxy-4dmv5" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:13.516426  258995 pod_ready.go:81] duration metric: took 174.847219ms waiting for pod "kube-proxy-4dmv5" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.516439  258995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.552335  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:13.564730  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:13.615190  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:13.667707  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:13.915929  258995 pod_ready.go:92] pod "kube-scheduler-addons-257591" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:13.915958  258995 pod_ready.go:81] duration metric: took 399.511253ms waiting for pod "kube-scheduler-addons-257591" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:13.915969  258995 pod_ready.go:38] duration metric: took 43.21805377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:28:13.915983  258995 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:28:13.916048  258995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:28:13.946507  258995 api_server.go:72] duration metric: took 43.644927846s to wait for apiserver process to appear ...
	I1107 23:28:13.946535  258995 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:28:13.946552  258995 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1107 23:28:13.959585  258995 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1107 23:28:13.961205  258995 api_server.go:141] control plane version: v1.28.3
	I1107 23:28:13.961263  258995 api_server.go:131] duration metric: took 14.71937ms to wait for apiserver health ...
	I1107 23:28:13.961280  258995 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:28:14.052923  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:14.065201  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:14.117112  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:14.126512  258995 system_pods.go:59] 18 kube-system pods found
	I1107 23:28:14.126598  258995 system_pods.go:61] "coredns-5dd5756b68-mfz4n" [b09e69b5-3df8-4407-93d1-230494a22a84] Running
	I1107 23:28:14.126621  258995 system_pods.go:61] "csi-hostpath-attacher-0" [6f1e04e2-0e16-4b2c-88e7-45cdc76b989c] Running
	I1107 23:28:14.126661  258995 system_pods.go:61] "csi-hostpath-resizer-0" [b1f3f59d-cfcf-42a2-8d41-d30c485b3e8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:28:14.126686  258995 system_pods.go:61] "csi-hostpathplugin-mgtgt" [cd7dd3ad-c9ca-415f-b8ed-a7633daaf8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:28:14.126710  258995 system_pods.go:61] "etcd-addons-257591" [2bf2baff-98a3-4a5e-981d-c12fa0f1e783] Running
	I1107 23:28:14.126729  258995 system_pods.go:61] "kindnet-fgpk2" [2d92315e-eefa-46c0-acec-1a9e22b6b815] Running
	I1107 23:28:14.126760  258995 system_pods.go:61] "kube-apiserver-addons-257591" [566d85b4-2b23-48d3-8dba-194e0217fafb] Running
	I1107 23:28:14.126782  258995 system_pods.go:61] "kube-controller-manager-addons-257591" [22d9dc17-f59a-4138-935e-851d2a58643f] Running
	I1107 23:28:14.126804  258995 system_pods.go:61] "kube-ingress-dns-minikube" [9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:28:14.126822  258995 system_pods.go:61] "kube-proxy-4dmv5" [b9c22784-1166-46a9-b4ce-64dfc8f8a8ba] Running
	I1107 23:28:14.126842  258995 system_pods.go:61] "kube-scheduler-addons-257591" [ccad453b-0ff9-4805-b48c-a7710140ff69] Running
	I1107 23:28:14.126871  258995 system_pods.go:61] "metrics-server-7c66d45ddc-cg26p" [45c4bdce-6aef-4f79-907d-486e408dab7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:28:14.126901  258995 system_pods.go:61] "nvidia-device-plugin-daemonset-9gvwv" [0b930239-b130-4c92-8be6-38b48109e2e7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:28:14.126923  258995 system_pods.go:61] "registry-proxy-7psnh" [efe1bce4-fa1e-4767-9f02-ea2ec2980490] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:28:14.126946  258995 system_pods.go:61] "registry-zhkpz" [ce9558ec-5e12-4932-9d39-c4f87f0d8ed1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:28:14.126977  258995 system_pods.go:61] "snapshot-controller-58dbcc7b99-h58hk" [45655c0a-2198-42d0-a007-ea197ac2f4a5] Running
	I1107 23:28:14.127004  258995 system_pods.go:61] "snapshot-controller-58dbcc7b99-pvgsl" [705ed3da-fa63-43a9-ac8e-1d09b0608cb8] Running
	I1107 23:28:14.127025  258995 system_pods.go:61] "storage-provisioner" [23af9c7e-0dc5-44f0-aead-489b50faf753] Running
	I1107 23:28:14.127046  258995 system_pods.go:74] duration metric: took 165.759598ms to wait for pod list to return data ...
	I1107 23:28:14.127067  258995 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:28:14.167629  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:14.315599  258995 default_sa.go:45] found service account: "default"
	I1107 23:28:14.315684  258995 default_sa.go:55] duration metric: took 188.582781ms for default service account to be created ...
	I1107 23:28:14.315719  258995 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:28:14.527566  258995 system_pods.go:86] 18 kube-system pods found
	I1107 23:28:14.527657  258995 system_pods.go:89] "coredns-5dd5756b68-mfz4n" [b09e69b5-3df8-4407-93d1-230494a22a84] Running
	I1107 23:28:14.527679  258995 system_pods.go:89] "csi-hostpath-attacher-0" [6f1e04e2-0e16-4b2c-88e7-45cdc76b989c] Running
	I1107 23:28:14.527718  258995 system_pods.go:89] "csi-hostpath-resizer-0" [b1f3f59d-cfcf-42a2-8d41-d30c485b3e8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:28:14.527745  258995 system_pods.go:89] "csi-hostpathplugin-mgtgt" [cd7dd3ad-c9ca-415f-b8ed-a7633daaf8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:28:14.527769  258995 system_pods.go:89] "etcd-addons-257591" [2bf2baff-98a3-4a5e-981d-c12fa0f1e783] Running
	I1107 23:28:14.527791  258995 system_pods.go:89] "kindnet-fgpk2" [2d92315e-eefa-46c0-acec-1a9e22b6b815] Running
	I1107 23:28:14.527827  258995 system_pods.go:89] "kube-apiserver-addons-257591" [566d85b4-2b23-48d3-8dba-194e0217fafb] Running
	I1107 23:28:14.527854  258995 system_pods.go:89] "kube-controller-manager-addons-257591" [22d9dc17-f59a-4138-935e-851d2a58643f] Running
	I1107 23:28:14.527880  258995 system_pods.go:89] "kube-ingress-dns-minikube" [9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:28:14.527901  258995 system_pods.go:89] "kube-proxy-4dmv5" [b9c22784-1166-46a9-b4ce-64dfc8f8a8ba] Running
	I1107 23:28:14.527932  258995 system_pods.go:89] "kube-scheduler-addons-257591" [ccad453b-0ff9-4805-b48c-a7710140ff69] Running
	I1107 23:28:14.527956  258995 system_pods.go:89] "metrics-server-7c66d45ddc-cg26p" [45c4bdce-6aef-4f79-907d-486e408dab7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:28:14.527978  258995 system_pods.go:89] "nvidia-device-plugin-daemonset-9gvwv" [0b930239-b130-4c92-8be6-38b48109e2e7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:28:14.528002  258995 system_pods.go:89] "registry-proxy-7psnh" [efe1bce4-fa1e-4767-9f02-ea2ec2980490] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:28:14.528036  258995 system_pods.go:89] "registry-zhkpz" [ce9558ec-5e12-4932-9d39-c4f87f0d8ed1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:28:14.528062  258995 system_pods.go:89] "snapshot-controller-58dbcc7b99-h58hk" [45655c0a-2198-42d0-a007-ea197ac2f4a5] Running
	I1107 23:28:14.528082  258995 system_pods.go:89] "snapshot-controller-58dbcc7b99-pvgsl" [705ed3da-fa63-43a9-ac8e-1d09b0608cb8] Running
	I1107 23:28:14.528100  258995 system_pods.go:89] "storage-provisioner" [23af9c7e-0dc5-44f0-aead-489b50faf753] Running
	I1107 23:28:14.528120  258995 system_pods.go:126] duration metric: took 212.3586ms to wait for k8s-apps to be running ...
	I1107 23:28:14.528156  258995 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:28:14.528237  258995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:28:14.555014  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:14.565117  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:14.566322  258995 system_svc.go:56] duration metric: took 38.158575ms WaitForService to wait for kubelet.
	I1107 23:28:14.566381  258995 kubeadm.go:581] duration metric: took 44.264815513s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:28:14.566417  258995 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:28:14.615578  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:14.667012  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:14.716357  258995 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:28:14.716392  258995 node_conditions.go:123] node cpu capacity is 2
	I1107 23:28:14.716405  258995 node_conditions.go:105] duration metric: took 149.966787ms to run NodePressure ...
	I1107 23:28:14.716418  258995 start.go:228] waiting for startup goroutines ...
	I1107 23:28:15.081157  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:15.081751  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:15.121615  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:15.168808  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:15.553034  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:15.565932  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:15.616122  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:15.667756  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:16.053484  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:16.067024  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:16.115559  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:16.167636  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:16.553338  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:16.565260  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:16.615137  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:16.669804  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:17.052553  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:17.064873  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:17.115617  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:17.168182  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:17.557288  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:17.566859  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:17.617390  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:17.667994  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:18.052097  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:18.065188  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:18.116159  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:18.169521  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:18.552417  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:18.564601  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:18.615293  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:18.674356  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:19.052119  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:19.064888  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:19.115355  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:19.170764  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:19.552066  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:19.564810  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:19.614733  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:19.667091  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:20.069440  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:20.078112  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:20.115873  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:20.167998  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:20.552917  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:20.565385  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:20.615654  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:20.669972  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:21.053663  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:21.065064  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:21.115229  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:21.167597  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:21.552760  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:21.564792  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:21.615661  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:21.668890  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:22.052041  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:22.065128  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:22.114682  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:22.167333  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:22.552329  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:22.566179  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:28:22.614850  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:22.667595  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:23.052651  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:23.064350  258995 kapi.go:107] duration metric: took 45.54404208s to wait for kubernetes.io/minikube-addons=registry ...
	I1107 23:28:23.114956  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:23.167956  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:23.551908  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:23.615617  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:23.667032  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:24.054088  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:24.115834  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:24.167193  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:24.552241  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:24.614837  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:24.668093  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:25.062375  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:25.121852  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:25.177163  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:25.552423  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:25.615140  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:28:25.667132  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:26.052606  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:26.116100  258995 kapi.go:107] duration metric: took 45.531916807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1107 23:28:26.118077  258995 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-257591 cluster.
	I1107 23:28:26.119697  258995 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1107 23:28:26.121334  258995 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1107 23:28:26.168263  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:26.552599  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:26.669918  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:27.053255  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:27.168533  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:27.552684  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:27.667702  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:28.052714  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:28.169063  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:28.552819  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:28.669444  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:29.052178  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:29.167669  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:29.552411  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:29.667407  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:30.055316  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:30.168601  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:30.552281  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:30.669187  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:31.051483  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:31.166720  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:31.552450  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:31.667148  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:32.051647  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:32.167370  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:32.552111  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:32.667854  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:33.052509  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:33.169407  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:33.554601  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:33.667538  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:34.054869  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:34.168471  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:34.552368  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:34.673510  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:35.053607  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:35.168943  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:35.552929  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:35.670852  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:36.053629  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:36.168251  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:36.554609  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:36.675736  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:37.052560  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:37.167335  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:37.553516  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:37.667446  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:38.052710  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:38.167093  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:38.552023  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:38.667755  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:39.052290  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:39.167865  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:39.553362  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:39.666936  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:40.056133  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:40.167940  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:40.552990  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:40.668083  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:41.052151  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:41.167916  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:41.552186  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:41.668370  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:42.052720  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:42.170393  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:42.555834  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:42.667785  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:43.055920  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:43.169707  258995 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:28:43.552825  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:43.667649  258995 kapi.go:107] duration metric: took 1m4.570935691s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1107 23:28:44.052675  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:44.552426  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:45.053284  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:45.551972  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:46.052341  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:46.552492  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:47.052295  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:47.552727  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:48.053160  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:48.552436  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:49.057570  258995 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:28:49.552497  258995 kapi.go:107] duration metric: took 1m12.030231533s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1107 23:28:49.554218  258995 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, metrics-server, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1107 23:28:49.555843  258995 addons.go:502] enable addons completed in 1m19.574115s: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner ingress-dns metrics-server storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1107 23:28:49.555888  258995 start.go:233] waiting for cluster config update ...
	I1107 23:28:49.555916  258995 start.go:242] writing updated cluster config ...
	I1107 23:28:49.556209  258995 ssh_runner.go:195] Run: rm -f paused
	I1107 23:28:49.712307  258995 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:28:49.715885  258995 out.go:177] * Done! kubectl is now configured to use "addons-257591" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	b5712c4e786d1       dd1b12fcb6097       6 seconds ago        Exited              hello-world-app                          2                   51484fde15e76       hello-world-app-5d77478584-q5mvw
	4f1d906fd72a7       fc9db2894f4e4       25 seconds ago       Exited              helper-pod                               0                   b93b863eb7705       helper-pod-delete-pvc-9e3ec8d5-6b02-4665-bda0-da43e0c8626d
	8b4a6e82e87ad       aae348c9fbd40       34 seconds ago       Running             nginx                                    0                   37993b684e7b3       nginx
	528f104afcfe5       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   c875974daa14d       csi-hostpathplugin-mgtgt
	db3515cd3e2ab       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   c875974daa14d       csi-hostpathplugin-mgtgt
	0379640d31bde       922312104da8a       About a minute ago   Running             liveness-probe                           0                   c875974daa14d       csi-hostpathplugin-mgtgt
	1ac20b3c30190       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   c875974daa14d       csi-hostpathplugin-mgtgt
	c9d55a14258ad       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   c875974daa14d       csi-hostpathplugin-mgtgt
	8b3c98f324f9a       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   5a3764e5528b5       local-path-provisioner-78b46b4d5c-zmxt7
	c338776248c6a       5743dc525f662       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   8eb89d6682214       nvidia-device-plugin-daemonset-9gvwv
	d8716dda04f22       2a5f29343eb03       About a minute ago   Running             gcp-auth                                 0                   c10bfbf2cf448       gcp-auth-d4c87556c-tq8zk
	521505c0285cd       af594c6a879f2       About a minute ago   Exited              patch                                    2                   e6a30ed7098d2       ingress-nginx-admission-patch-xzjqr
	ac1768baa6bbb       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   c875974daa14d       csi-hostpathplugin-mgtgt
	575cf91e67d9d       af594c6a879f2       About a minute ago   Exited              create                                   0                   ab868d427a20b       ingress-nginx-admission-create-2bpj5
	00e41ded43fc8       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   d817f9eb838c2       csi-hostpath-resizer-0
	47b50608374e0       72dffd26670ce       About a minute ago   Running             cloud-spanner-emulator                   0                   05ab0c03a924b       cloud-spanner-emulator-56665cdfc-fhtmc
	37ac75f5b22a2       97e04611ad434       About a minute ago   Running             coredns                                  0                   b31c9e6fb0321       coredns-5dd5756b68-mfz4n
	61e23453f27f7       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   af646ded095e7       csi-hostpath-attacher-0
	f50308248fff1       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   fff3037450433       snapshot-controller-58dbcc7b99-pvgsl
	ec68e6a026700       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   2534dccdce8e2       snapshot-controller-58dbcc7b99-h58hk
	9fb50c616ad84       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   d7f081ef383de       storage-provisioner
	c9d7864c2452b       a5dd5cdd6d3ef       2 minutes ago        Running             kube-proxy                               0                   d7b4e95fc5983       kube-proxy-4dmv5
	7d63f9a251ae5       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                              0                   ae94b55fe7482       kindnet-fgpk2
	f0e8ee8a083c2       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   3ef5c2ddfb9b9       etcd-addons-257591
	aec2822c8f207       537e9a59ee2fd       2 minutes ago        Running             kube-apiserver                           0                   4958521c28d0b       kube-apiserver-addons-257591
	4bdc15950f156       42a4e73724daa       2 minutes ago        Running             kube-scheduler                           0                   16f78173197d2       kube-scheduler-addons-257591
	81cde726d5b87       8276439b4f237       2 minutes ago        Running             kube-controller-manager                  0                   c4f4040ece6db       kube-controller-manager-addons-257591
	
	* 
	* ==> containerd <==
	* Nov 07 23:29:36 addons-257591 containerd[745]: time="2023-11-07T23:29:36.434385809Z" level=info msg="StartContainer for \"b5712c4e786d10965d08855e3fa0e80238619d37d5b716e1ee9f46a5274c965a\""
	Nov 07 23:29:36 addons-257591 containerd[745]: time="2023-11-07T23:29:36.502791096Z" level=info msg="StartContainer for \"b5712c4e786d10965d08855e3fa0e80238619d37d5b716e1ee9f46a5274c965a\" returns successfully"
	Nov 07 23:29:36 addons-257591 containerd[745]: time="2023-11-07T23:29:36.530633180Z" level=info msg="shim disconnected" id=b5712c4e786d10965d08855e3fa0e80238619d37d5b716e1ee9f46a5274c965a
	Nov 07 23:29:36 addons-257591 containerd[745]: time="2023-11-07T23:29:36.530695867Z" level=warning msg="cleaning up after shim disconnected" id=b5712c4e786d10965d08855e3fa0e80238619d37d5b716e1ee9f46a5274c965a namespace=k8s.io
	Nov 07 23:29:36 addons-257591 containerd[745]: time="2023-11-07T23:29:36.530706805Z" level=info msg="cleaning up dead shim"
	Nov 07 23:29:36 addons-257591 containerd[745]: time="2023-11-07T23:29:36.541400644Z" level=warning msg="cleanup warnings time=\"2023-11-07T23:29:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9384 runtime=io.containerd.runc.v2\n"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.224489279Z" level=info msg="Kill container \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\""
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.302445056Z" level=info msg="shim disconnected" id=bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.302513585Z" level=warning msg="cleaning up after shim disconnected" id=bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d namespace=k8s.io
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.302524301Z" level=info msg="cleaning up dead shim"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.314647053Z" level=warning msg="cleanup warnings time=\"2023-11-07T23:29:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9415 runtime=io.containerd.runc.v2\n"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.318328244Z" level=info msg="StopContainer for \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\" returns successfully"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.318962089Z" level=info msg="StopPodSandbox for \"28505b337b3bbdaf7f17e110a6acc670c9539bfa0c504142ed7d335928bf994b\""
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.319065761Z" level=info msg="Container to stop \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.352196033Z" level=info msg="shim disconnected" id=28505b337b3bbdaf7f17e110a6acc670c9539bfa0c504142ed7d335928bf994b
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.352257391Z" level=warning msg="cleaning up after shim disconnected" id=28505b337b3bbdaf7f17e110a6acc670c9539bfa0c504142ed7d335928bf994b namespace=k8s.io
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.352269354Z" level=info msg="cleaning up dead shim"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.363433995Z" level=warning msg="cleanup warnings time=\"2023-11-07T23:29:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9447 runtime=io.containerd.runc.v2\n"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.429546124Z" level=info msg="TearDown network for sandbox \"28505b337b3bbdaf7f17e110a6acc670c9539bfa0c504142ed7d335928bf994b\" successfully"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.429913722Z" level=info msg="StopPodSandbox for \"28505b337b3bbdaf7f17e110a6acc670c9539bfa0c504142ed7d335928bf994b\" returns successfully"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.440001794Z" level=info msg="RemoveContainer for \"cdade6128cfb66e3ba22840a51d6a31c73559b431685ef60d2c411458208af0a\""
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.453413553Z" level=info msg="RemoveContainer for \"cdade6128cfb66e3ba22840a51d6a31c73559b431685ef60d2c411458208af0a\" returns successfully"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.458565618Z" level=info msg="RemoveContainer for \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\""
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.467464121Z" level=info msg="RemoveContainer for \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\" returns successfully"
	Nov 07 23:29:37 addons-257591 containerd[745]: time="2023-11-07T23:29:37.468286938Z" level=error msg="ContainerStatus for \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\": not found"
	
	* 
	* ==> coredns [37ac75f5b22a25513676ec6844f5f2df06ea83b31579ef02f1645c7a32661748] <==
	* [INFO] 10.244.0.19:48490 - 12888 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096771s
	[INFO] 10.244.0.19:33220 - 16320 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002241752s
	[INFO] 10.244.0.19:48490 - 42668 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001612641s
	[INFO] 10.244.0.19:48490 - 5080 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003400503s
	[INFO] 10.244.0.19:33220 - 65102 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003530964s
	[INFO] 10.244.0.19:33220 - 26662 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00017559s
	[INFO] 10.244.0.19:48490 - 35449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000043233s
	[INFO] 10.244.0.19:34125 - 45149 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101128s
	[INFO] 10.244.0.19:34125 - 16165 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054605s
	[INFO] 10.244.0.19:34125 - 50075 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077751s
	[INFO] 10.244.0.19:34125 - 14997 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006889s
	[INFO] 10.244.0.19:34125 - 45696 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059323s
	[INFO] 10.244.0.19:34125 - 34471 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062465s
	[INFO] 10.244.0.19:34125 - 45611 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001037759s
	[INFO] 10.244.0.19:58799 - 39104 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076168s
	[INFO] 10.244.0.19:58799 - 57786 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065863s
	[INFO] 10.244.0.19:58799 - 14551 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000580274s
	[INFO] 10.244.0.19:34125 - 14897 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001073171s
	[INFO] 10.244.0.19:58799 - 25482 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000259478s
	[INFO] 10.244.0.19:34125 - 1179 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070909s
	[INFO] 10.244.0.19:58799 - 39632 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053243s
	[INFO] 10.244.0.19:58799 - 13263 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042256s
	[INFO] 10.244.0.19:58799 - 15596 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000913475s
	[INFO] 10.244.0.19:58799 - 41052 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00087135s
	[INFO] 10.244.0.19:58799 - 56304 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100873s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-257591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-257591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=addons-257591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_27_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-257591
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-257591"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:27:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-257591
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:29:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:29:19 +0000   Tue, 07 Nov 2023 23:27:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:29:19 +0000   Tue, 07 Nov 2023 23:27:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:29:19 +0000   Tue, 07 Nov 2023 23:27:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:29:19 +0000   Tue, 07 Nov 2023 23:27:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-257591
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e7c65b2f8684d489e883905a1a53541
	  System UUID:                9247095f-6dce-4567-9e27-7182c71e03e9
	  Boot ID:                    ed0b58e3-cdd8-477c-a723-0ef811cfaf0e
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-56665cdfc-fhtmc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  default                     hello-world-app-5d77478584-q5mvw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-d4c87556c-tq8zk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 coredns-5dd5756b68-mfz4n                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m12s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpathplugin-mgtgt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 etcd-addons-257591                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-fgpk2                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-addons-257591               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-addons-257591      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-4dmv5                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-addons-257591               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 nvidia-device-plugin-daemonset-9gvwv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 snapshot-controller-58dbcc7b99-h58hk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 snapshot-controller-58dbcc7b99-pvgsl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  local-path-storage          local-path-provisioner-78b46b4d5c-zmxt7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node addons-257591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node addons-257591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node addons-257591 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m26s                  kubelet          Node addons-257591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s                  kubelet          Node addons-257591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s                  kubelet          Node addons-257591 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m26s                  kubelet          Node addons-257591 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m26s                  kubelet          Node addons-257591 status is now: NodeReady
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m13s                  node-controller  Node addons-257591 event: Registered Node addons-257591 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000721] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000951] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=00000000c43da2f1
	[  +0.001087] FS-Cache: N-key=[8] '9d385c0100000000'
	[  +0.005299] FS-Cache: Duplicate cookie detected
	[  +0.000900] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001185] FS-Cache: O-cookie d=000000003c1b4ad3{9p.inode} n=000000006ccd097a
	[  +0.001372] FS-Cache: O-key=[8] '9d385c0100000000'
	[  +0.000884] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.001151] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=00000000327b8b53
	[  +0.001866] FS-Cache: N-key=[8] '9d385c0100000000'
	[  +3.308824] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=000000003c1b4ad3{9p.inode} n=00000000d7302a7e
	[  +0.001149] FS-Cache: O-key=[8] '9c385c0100000000'
	[  +0.000721] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=00000000c43da2f1
	[  +0.001091] FS-Cache: N-key=[8] '9c385c0100000000'
	[  +0.426329] FS-Cache: Duplicate cookie detected
	[  +0.000745] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=000000003c1b4ad3{9p.inode} n=00000000733b9062
	[  +0.001155] FS-Cache: O-key=[8] 'a5385c0100000000'
	[  +0.000718] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=0000000062316daf
	[  +0.001057] FS-Cache: N-key=[8] 'a5385c0100000000'
	[Nov 7 22:27] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [f0e8ee8a083c2946f59e1b62f52fda8e7ae9d10567f820c6c75fdf0efc31ab2d] <==
	* {"level":"info","ts":"2023-11-07T23:27:08.453446Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-07T23:27:08.472453Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-07T23:27:08.472552Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-07T23:27:08.453564Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-07T23:27:08.473266Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-07T23:27:08.453835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-07T23:27:08.473565Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-07T23:27:08.481272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-07T23:27:08.481473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-07T23:27:08.481611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-11-07T23:27:08.481711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-07T23:27:08.481801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-07T23:27:08.481882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-11-07T23:27:08.481972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-07T23:27:08.485368Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:27:08.48948Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-257591 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-07T23:27:08.489759Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:27:08.491065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-07T23:27:08.491671Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:27:08.492477Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:27:08.49262Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:27:08.492316Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:27:08.492359Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:27:08.497556Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-07T23:27:08.499029Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> gcp-auth [d8716dda04f22ce95ff0602916a3dc1c8cd01aba6bc07ac719118191d72d4de6] <==
	* 2023/11/07 23:28:25 GCP Auth Webhook started!
	2023/11/07 23:29:00 Ready to marshal response ...
	2023/11/07 23:29:00 Ready to write response ...
	2023/11/07 23:29:05 Ready to marshal response ...
	2023/11/07 23:29:05 Ready to write response ...
	2023/11/07 23:29:07 Ready to marshal response ...
	2023/11/07 23:29:07 Ready to write response ...
	2023/11/07 23:29:07 Ready to marshal response ...
	2023/11/07 23:29:07 Ready to write response ...
	2023/11/07 23:29:16 Ready to marshal response ...
	2023/11/07 23:29:16 Ready to write response ...
	2023/11/07 23:29:16 Ready to marshal response ...
	2023/11/07 23:29:16 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:29:43 up  2:08,  0 users,  load average: 2.05, 2.53, 2.48
	Linux addons-257591 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [7d63f9a251ae53bfdf1727cf1d29686d531f5371ce56d20fc8f0ece5c697ca66] <==
	* I1107 23:27:31.313764       1 main.go:146] kindnetd IP family: "ipv4"
	I1107 23:27:31.313780       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1107 23:28:01.643640       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1107 23:28:01.658362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:28:01.658403       1 main.go:227] handling current node
	I1107 23:28:11.670831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:28:11.670857       1 main.go:227] handling current node
	I1107 23:28:21.682469       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:28:21.682495       1 main.go:227] handling current node
	I1107 23:28:31.691862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:28:31.691891       1 main.go:227] handling current node
	I1107 23:28:41.705355       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:28:41.705381       1 main.go:227] handling current node
	I1107 23:28:51.714322       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:28:51.714352       1 main.go:227] handling current node
	I1107 23:29:01.727654       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:29:01.727687       1 main.go:227] handling current node
	I1107 23:29:11.732570       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:29:11.732804       1 main.go:227] handling current node
	I1107 23:29:21.744835       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:29:21.744873       1 main.go:227] handling current node
	I1107 23:29:31.750282       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:29:31.750317       1 main.go:227] handling current node
	I1107 23:29:41.761900       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:29:41.761933       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [aec2822c8f207a3351bf98345c5f026ac5ac238f4761f4f0b152e08525bd1ae5] <==
	* I1107 23:27:40.393883       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.67.230"}
	I1107 23:28:13.096308       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1107 23:28:36.765400       1 handler_proxy.go:93] no RequestInfo found in the context
	E1107 23:28:36.765442       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1107 23:28:36.765450       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1107 23:28:36.766624       1 handler_proxy.go:93] no RequestInfo found in the context
	E1107 23:28:36.766694       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1107 23:28:36.766707       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1107 23:28:46.242599       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.203.27:443: connect: connection refused
	W1107 23:28:46.242737       1 handler_proxy.go:93] no RequestInfo found in the context
	E1107 23:28:46.242787       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1107 23:28:46.248451       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1107 23:28:46.248641       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.203.27:443: connect: connection refused
	E1107 23:28:46.249278       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.203.27:443: connect: connection refused
	E1107 23:28:46.259366       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.203.27:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.203.27:443: connect: connection refused
	I1107 23:28:46.379624       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1107 23:29:01.387364       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1107 23:29:01.396068       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1107 23:29:02.425525       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1107 23:29:05.405777       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1107 23:29:05.726078       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.46.160"}
	I1107 23:29:16.877950       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.59.149"}
	E1107 23:29:33.273523       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	* 
	* ==> kube-controller-manager [81cde726d5b87b16d62c85634eeb6e0675e79b9489487085b299c6cea9a0f853] <==
	* W1107 23:29:11.455846       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:29:11.455880       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:29:11.498332       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1107 23:29:16.580354       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1107 23:29:16.606205       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-q5mvw"
	I1107 23:29:16.624477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.751639ms"
	I1107 23:29:16.656122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.583179ms"
	I1107 23:29:16.656221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.154µs"
	I1107 23:29:16.686002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.735µs"
	I1107 23:29:17.498206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="8.73µs"
	I1107 23:29:20.426742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.088µs"
	I1107 23:29:21.395727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.391µs"
	I1107 23:29:22.399574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="89.296µs"
	W1107 23:29:23.838383       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:29:23.838416       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:29:29.518142       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1107 23:29:29.518285       1 shared_informer.go:318] Caches are synced for resource quota
	I1107 23:29:29.985644       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1107 23:29:29.985727       1 shared_informer.go:318] Caches are synced for garbage collector
	I1107 23:29:34.170045       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1107 23:29:34.182762       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1107 23:29:34.183319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.228µs"
	I1107 23:29:37.451188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.596µs"
	W1107 23:29:38.746525       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:29:38.746666       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [c9d7864c2452bdaaac7e7da36eb59403fd4f4c36b68ee4a3deb84700422ff0a1] <==
	* I1107 23:27:31.622534       1 server_others.go:69] "Using iptables proxy"
	I1107 23:27:31.667064       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1107 23:27:31.753678       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1107 23:27:31.759499       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:27:31.759558       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1107 23:27:31.759568       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1107 23:27:31.759648       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:27:31.759888       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:27:31.759899       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:27:31.762343       1 config.go:188] "Starting service config controller"
	I1107 23:27:31.762358       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:27:31.762475       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:27:31.762480       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:27:31.763289       1 config.go:315] "Starting node config controller"
	I1107 23:27:31.763299       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:27:31.864686       1 shared_informer.go:318] Caches are synced for node config
	I1107 23:27:31.864717       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:27:31.864743       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [4bdc15950f156dd39e1d5f1b4a3cc8c7244e358be45e064f2a383a65108b6819] <==
	* W1107 23:27:14.401312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:27:14.402401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 23:27:14.401351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:27:14.402582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 23:27:14.401406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:27:14.401452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 23:27:14.401502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 23:27:14.401544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:27:14.401588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 23:27:14.401630       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:27:14.401673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:27:14.401757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 23:27:14.401801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 23:27:14.401837       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:27:14.402778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:27:14.402839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:27:14.402915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 23:27:14.403001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 23:27:14.403069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:27:14.403220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:27:14.403229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:27:14.403495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:27:14.403510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1107 23:27:14.403519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1107 23:27:16.091860       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 07 23:29:22 addons-257591 kubelet[1353]: I1107 23:29:22.387808    1353 scope.go:117] "RemoveContainer" containerID="cdade6128cfb66e3ba22840a51d6a31c73559b431685ef60d2c411458208af0a"
	Nov 07 23:29:22 addons-257591 kubelet[1353]: E1107 23:29:22.388646    1353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-q5mvw_default(d31c8dc2-dbc0-4f6c-af30-43b3f33c32a3)\"" pod="default/hello-world-app-5d77478584-q5mvw" podUID="d31c8dc2-dbc0-4f6c-af30-43b3f33c32a3"
	Nov 07 23:29:24 addons-257591 kubelet[1353]: I1107 23:29:24.413564    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="241e1f50-c675-4efd-96b9-47cd9698e2da" path="/var/lib/kubelet/pods/241e1f50-c675-4efd-96b9-47cd9698e2da/volumes"
	Nov 07 23:29:33 addons-257591 kubelet[1353]: I1107 23:29:33.049883    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2w7m\" (UniqueName: \"kubernetes.io/projected/9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8-kube-api-access-p2w7m\") pod \"9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8\" (UID: \"9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8\") "
	Nov 07 23:29:33 addons-257591 kubelet[1353]: I1107 23:29:33.055427    1353 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8-kube-api-access-p2w7m" (OuterVolumeSpecName: "kube-api-access-p2w7m") pod "9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8" (UID: "9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8"). InnerVolumeSpecName "kube-api-access-p2w7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:29:33 addons-257591 kubelet[1353]: I1107 23:29:33.151215    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p2w7m\" (UniqueName: \"kubernetes.io/projected/9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8-kube-api-access-p2w7m\") on node \"addons-257591\" DevicePath \"\""
	Nov 07 23:29:33 addons-257591 kubelet[1353]: I1107 23:29:33.414187    1353 scope.go:117] "RemoveContainer" containerID="61cd21d593e0084020d4fa7682321102cd55594d784e119404894445691de9fa"
	Nov 07 23:29:34 addons-257591 kubelet[1353]: I1107 23:29:34.412678    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="13706ba5-808b-437c-9d5e-d9cd90c87d15" path="/var/lib/kubelet/pods/13706ba5-808b-437c-9d5e-d9cd90c87d15/volumes"
	Nov 07 23:29:34 addons-257591 kubelet[1353]: I1107 23:29:34.413056    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6064298c-617e-45c6-a8ba-2ae8ffa5c3e9" path="/var/lib/kubelet/pods/6064298c-617e-45c6-a8ba-2ae8ffa5c3e9/volumes"
	Nov 07 23:29:34 addons-257591 kubelet[1353]: I1107 23:29:34.413465    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8" path="/var/lib/kubelet/pods/9e44ceea-25f4-446f-b61e-5f4eb4d8b6e8/volumes"
	Nov 07 23:29:36 addons-257591 kubelet[1353]: I1107 23:29:36.409764    1353 scope.go:117] "RemoveContainer" containerID="cdade6128cfb66e3ba22840a51d6a31c73559b431685ef60d2c411458208af0a"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.436890    1353 scope.go:117] "RemoveContainer" containerID="cdade6128cfb66e3ba22840a51d6a31c73559b431685ef60d2c411458208af0a"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.437349    1353 scope.go:117] "RemoveContainer" containerID="b5712c4e786d10965d08855e3fa0e80238619d37d5b716e1ee9f46a5274c965a"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: E1107 23:29:37.437619    1353 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-q5mvw_default(d31c8dc2-dbc0-4f6c-af30-43b3f33c32a3)\"" pod="default/hello-world-app-5d77478584-q5mvw" podUID="d31c8dc2-dbc0-4f6c-af30-43b3f33c32a3"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.454859    1353 scope.go:117] "RemoveContainer" containerID="bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.467807    1353 scope.go:117] "RemoveContainer" containerID="bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: E1107 23:29:37.468507    1353 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\": not found" containerID="bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.468556    1353 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d"} err="failed to get container status \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb10bde6671df0798fd9c73dd4409db277cc6d28239d2893cdcff48ed03a7f7d\": not found"
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.479384    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/581010a7-640a-4ab1-933f-e41d8ac41dcf-webhook-cert\") pod \"581010a7-640a-4ab1-933f-e41d8ac41dcf\" (UID: \"581010a7-640a-4ab1-933f-e41d8ac41dcf\") "
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.479445    1353 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kctzz\" (UniqueName: \"kubernetes.io/projected/581010a7-640a-4ab1-933f-e41d8ac41dcf-kube-api-access-kctzz\") pod \"581010a7-640a-4ab1-933f-e41d8ac41dcf\" (UID: \"581010a7-640a-4ab1-933f-e41d8ac41dcf\") "
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.482549    1353 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581010a7-640a-4ab1-933f-e41d8ac41dcf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "581010a7-640a-4ab1-933f-e41d8ac41dcf" (UID: "581010a7-640a-4ab1-933f-e41d8ac41dcf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.488762    1353 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581010a7-640a-4ab1-933f-e41d8ac41dcf-kube-api-access-kctzz" (OuterVolumeSpecName: "kube-api-access-kctzz") pod "581010a7-640a-4ab1-933f-e41d8ac41dcf" (UID: "581010a7-640a-4ab1-933f-e41d8ac41dcf"). InnerVolumeSpecName "kube-api-access-kctzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.579844    1353 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/581010a7-640a-4ab1-933f-e41d8ac41dcf-webhook-cert\") on node \"addons-257591\" DevicePath \"\""
	Nov 07 23:29:37 addons-257591 kubelet[1353]: I1107 23:29:37.579891    1353 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kctzz\" (UniqueName: \"kubernetes.io/projected/581010a7-640a-4ab1-933f-e41d8ac41dcf-kube-api-access-kctzz\") on node \"addons-257591\" DevicePath \"\""
	Nov 07 23:29:38 addons-257591 kubelet[1353]: I1107 23:29:38.412753    1353 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="581010a7-640a-4ab1-933f-e41d8ac41dcf" path="/var/lib/kubelet/pods/581010a7-640a-4ab1-933f-e41d8ac41dcf/volumes"
	
	* 
	* ==> storage-provisioner [9fb50c616ad844b9c55d6a63a6665f3f1dc7697a289c7b2a22a62bfcbb17f2ad] <==
	* I1107 23:27:35.567890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:27:35.596024       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:27:35.596157       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:27:35.608934       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:27:35.611432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7156c7ae-05dd-4dc8-92e0-66cb7993677f", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-257591_6f4101b8-2953-41fd-be4d-7c88f401939c became leader
	I1107 23:27:35.611471       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-257591_6f4101b8-2953-41fd-be4d-7c88f401939c!
	I1107 23:27:35.711669       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-257591_6f4101b8-2953-41fd-be4d-7c88f401939c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-257591 -n addons-257591
helpers_test.go:261: (dbg) Run:  kubectl --context addons-257591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (39.78s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (284.54963ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image load --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 image load --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr: (3.703933454s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-662509" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image load --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 image load --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr: (3.337510788s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-662509" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.119922692s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-662509
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image load --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 image load --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr: (3.402843944s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-662509" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image save gcr.io/google-containers/addon-resizer:functional-662509 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I1107 23:35:56.610367  288130 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:35:56.613694  288130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:35:56.613719  288130 out.go:309] Setting ErrFile to fd 2...
	I1107 23:35:56.613727  288130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:35:56.614031  288130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:35:56.615007  288130 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:35:56.615139  288130 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:35:56.615816  288130 cli_runner.go:164] Run: docker container inspect functional-662509 --format={{.State.Status}}
	I1107 23:35:56.646459  288130 ssh_runner.go:195] Run: systemctl --version
	I1107 23:35:56.646549  288130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662509
	I1107 23:35:56.665105  288130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/functional-662509/id_rsa Username:docker}
	I1107 23:35:56.755575  288130 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1107 23:35:56.755634  288130 cache_images.go:254] Failed to load cached images for profile functional-662509. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1107 23:35:56.755658  288130 cache_images.go:262] succeeded pushing to: 
	I1107 23:35:56.755663  288130 cache_images.go:263] failed pushing to: functional-662509

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (52.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-537363 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-537363 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.394592608s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-537363 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-537363 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c7f58006-e647-4dc1-b8ab-fdd2c9077fc2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c7f58006-e647-4dc1-b8ab-fdd2c9077fc2] Running
E1107 23:38:49.734417  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.017657785s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-537363 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021060633s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons disable ingress-dns --alsologtostderr -v=1: (5.839632739s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons disable ingress --alsologtostderr -v=1
E1107 23:39:17.418719  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons disable ingress --alsologtostderr -v=1: (7.600629658s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-537363
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-537363:

-- stdout --
	[
	    {
	        "Id": "b334d22253a4a6fe8370f0df80c11cadc53ec9f7e4ffe330980f86667e0d3969",
	        "Created": "2023-11-07T23:37:01.389436665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:37:01.759563165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62753ecb37c4e3c5bf7b6c8d02fe88b543f553e92492fca245cded98b0d364dd",
	        "ResolvConfPath": "/var/lib/docker/containers/b334d22253a4a6fe8370f0df80c11cadc53ec9f7e4ffe330980f86667e0d3969/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b334d22253a4a6fe8370f0df80c11cadc53ec9f7e4ffe330980f86667e0d3969/hostname",
	        "HostsPath": "/var/lib/docker/containers/b334d22253a4a6fe8370f0df80c11cadc53ec9f7e4ffe330980f86667e0d3969/hosts",
	        "LogPath": "/var/lib/docker/containers/b334d22253a4a6fe8370f0df80c11cadc53ec9f7e4ffe330980f86667e0d3969/b334d22253a4a6fe8370f0df80c11cadc53ec9f7e4ffe330980f86667e0d3969-json.log",
	        "Name": "/ingress-addon-legacy-537363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-537363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-537363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ffca2733c6d42103866afac2b049b0a8d26fb6c3aac953c4f274b436dd312847-init/diff:/var/lib/docker/overlay2/2ff5362f4db529bcd8a3ee4777c017c39b79e4e950c43f9c0d154fe3648aa161/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffca2733c6d42103866afac2b049b0a8d26fb6c3aac953c4f274b436dd312847/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffca2733c6d42103866afac2b049b0a8d26fb6c3aac953c4f274b436dd312847/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffca2733c6d42103866afac2b049b0a8d26fb6c3aac953c4f274b436dd312847/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-537363",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-537363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-537363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-537363",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-537363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec1e98f6f3f2fe8c1bf91f062c50094b8385c65103522aace778a010b95ff5fb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec1e98f6f3f2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-537363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b334d22253a4",
	                        "ingress-addon-legacy-537363"
	                    ],
	                    "NetworkID": "d696d258a49bfa5164a3531bfcd741bf200bad28db6aeefd395643a0d3f5a31d",
	                    "EndpointID": "31a388e86f438e1d6fa9847c9e58cf482c66dca8de20eb79441fe714109aeed4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-537363 -n ingress-addon-legacy-537363
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-537363 logs -n 25: (1.412531881s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-662509                                                   | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-662509                                                   | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-662509                                                   | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-662509 ssh findmnt                                          | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-662509 ssh findmnt                                          | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-662509 ssh findmnt                                          | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-662509                                                   | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-662509 ssh pgrep                                            | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-662509 image build -t                                       | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | localhost/my-image:functional-662509                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-662509 image ls                                             | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	| image          | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-662509                                                      | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-662509                                                   | functional-662509           | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:36 UTC |
	| start          | -p ingress-addon-legacy-537363                                         | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:36 UTC | 07 Nov 23 23:38 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                         |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-537363                                            | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:38 UTC | 07 Nov 23 23:38 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-537363                                            | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:38 UTC | 07 Nov 23 23:38 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-537363                                            | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:38 UTC | 07 Nov 23 23:38 UTC |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-537363 ip                                         | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:38 UTC | 07 Nov 23 23:38 UTC |
	| addons         | ingress-addon-legacy-537363                                            | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:39 UTC | 07 Nov 23 23:39 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-537363                                            | ingress-addon-legacy-537363 | jenkins | v1.32.0 | 07 Nov 23 23:39 UTC | 07 Nov 23 23:39 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:36:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:36:40.104745  292042 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:36:40.104955  292042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:36:40.104968  292042 out.go:309] Setting ErrFile to fd 2...
	I1107 23:36:40.104974  292042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:36:40.105286  292042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:36:40.105793  292042 out.go:303] Setting JSON to false
	I1107 23:36:40.106745  292042 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8146,"bootTime":1699392054,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1107 23:36:40.106833  292042 start.go:138] virtualization:  
	I1107 23:36:40.109268  292042 out.go:177] * [ingress-addon-legacy-537363] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:36:40.111803  292042 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:36:40.111950  292042 notify.go:220] Checking for updates...
	I1107 23:36:40.115405  292042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:36:40.117649  292042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:36:40.119597  292042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1107 23:36:40.121299  292042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:36:40.123431  292042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:36:40.125534  292042 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:36:40.152027  292042 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:36:40.152179  292042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:36:40.239775  292042 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-07 23:36:40.230058089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:36:40.239882  292042 docker.go:295] overlay module found
	I1107 23:36:40.242948  292042 out.go:177] * Using the docker driver based on user configuration
	I1107 23:36:40.244672  292042 start.go:298] selected driver: docker
	I1107 23:36:40.244687  292042 start.go:902] validating driver "docker" against <nil>
	I1107 23:36:40.244700  292042 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:36:40.245366  292042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:36:40.329522  292042 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-07 23:36:40.319835974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:36:40.329681  292042 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:36:40.329897  292042 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:36:40.331552  292042 out.go:177] * Using Docker driver with root privileges
	I1107 23:36:40.333157  292042 cni.go:84] Creating CNI manager for ""
	I1107 23:36:40.333179  292042 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:36:40.333192  292042 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:36:40.333207  292042 start_flags.go:323] config:
	{Name:ingress-addon-legacy-537363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:36:40.335265  292042 out.go:177] * Starting control plane node ingress-addon-legacy-537363 in cluster ingress-addon-legacy-537363
	I1107 23:36:40.336915  292042 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1107 23:36:40.338688  292042 out.go:177] * Pulling base image ...
	I1107 23:36:40.340275  292042 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1107 23:36:40.340359  292042 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:36:40.358096  292042 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:36:40.358121  292042 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:36:40.423406  292042 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1107 23:36:40.423442  292042 cache.go:56] Caching tarball of preloaded images
	I1107 23:36:40.423624  292042 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1107 23:36:40.425747  292042 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1107 23:36:40.427814  292042 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:36:40.579648  292042 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1107 23:36:53.425300  292042 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:36:53.425404  292042 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:36:54.614527  292042 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1107 23:36:54.614935  292042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/config.json ...
	I1107 23:36:54.614969  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/config.json: {Name:mka5725c3aade74a6225465f16525fed6a51e936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:36:54.615153  292042 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:36:54.615230  292042 start.go:365] acquiring machines lock for ingress-addon-legacy-537363: {Name:mk59fe5ade8338a785226fa265098c425c6cb7d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:36:54.615286  292042 start.go:369] acquired machines lock for "ingress-addon-legacy-537363" in 39.696µs
	I1107 23:36:54.615314  292042 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-537363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 23:36:54.615393  292042 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:36:54.617477  292042 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1107 23:36:54.617704  292042 start.go:159] libmachine.API.Create for "ingress-addon-legacy-537363" (driver="docker")
	I1107 23:36:54.617729  292042 client.go:168] LocalClient.Create starting
	I1107 23:36:54.617797  292042 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem
	I1107 23:36:54.617834  292042 main.go:141] libmachine: Decoding PEM data...
	I1107 23:36:54.617853  292042 main.go:141] libmachine: Parsing certificate...
	I1107 23:36:54.617929  292042 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem
	I1107 23:36:54.617952  292042 main.go:141] libmachine: Decoding PEM data...
	I1107 23:36:54.617964  292042 main.go:141] libmachine: Parsing certificate...
	I1107 23:36:54.618318  292042 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-537363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:36:54.635504  292042 cli_runner.go:211] docker network inspect ingress-addon-legacy-537363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:36:54.635582  292042 network_create.go:281] running [docker network inspect ingress-addon-legacy-537363] to gather additional debugging logs...
	I1107 23:36:54.635599  292042 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-537363
	W1107 23:36:54.653488  292042 cli_runner.go:211] docker network inspect ingress-addon-legacy-537363 returned with exit code 1
	I1107 23:36:54.653526  292042 network_create.go:284] error running [docker network inspect ingress-addon-legacy-537363]: docker network inspect ingress-addon-legacy-537363: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-537363 not found
	I1107 23:36:54.653546  292042 network_create.go:286] output of [docker network inspect ingress-addon-legacy-537363]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-537363 not found
	
	** /stderr **
	I1107 23:36:54.653660  292042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:36:54.670758  292042 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40001540c0}
	I1107 23:36:54.670802  292042 network_create.go:124] attempt to create docker network ingress-addon-legacy-537363 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 23:36:54.670861  292042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-537363 ingress-addon-legacy-537363
	I1107 23:36:54.743707  292042 network_create.go:108] docker network ingress-addon-legacy-537363 192.168.49.0/24 created
	I1107 23:36:54.743766  292042 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-537363" container
	I1107 23:36:54.743839  292042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:36:54.760400  292042 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-537363 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-537363 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:36:54.779180  292042 oci.go:103] Successfully created a docker volume ingress-addon-legacy-537363
	I1107 23:36:54.779268  292042 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-537363-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-537363 --entrypoint /usr/bin/test -v ingress-addon-legacy-537363:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:36:56.343748  292042 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-537363-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-537363 --entrypoint /usr/bin/test -v ingress-addon-legacy-537363:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.564431142s)
	I1107 23:36:56.343777  292042 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-537363
	I1107 23:36:56.343797  292042 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1107 23:36:56.343822  292042 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:36:56.343922  292042 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-537363:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:37:01.297142  292042 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-537363:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.953148069s)
	I1107 23:37:01.297181  292042 kic.go:203] duration metric: took 4.953357 seconds to extract preloaded images to volume
	W1107 23:37:01.297367  292042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:37:01.297487  292042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:37:01.373021  292042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-537363 --name ingress-addon-legacy-537363 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-537363 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-537363 --network ingress-addon-legacy-537363 --ip 192.168.49.2 --volume ingress-addon-legacy-537363:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:37:01.771033  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Running}}
	I1107 23:37:01.794516  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Status}}
	I1107 23:37:01.818295  292042 cli_runner.go:164] Run: docker exec ingress-addon-legacy-537363 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:37:01.906562  292042 oci.go:144] the created container "ingress-addon-legacy-537363" has a running status.
	I1107 23:37:01.906600  292042 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa...
	I1107 23:37:02.454255  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:37:02.454304  292042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:37:02.480319  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Status}}
	I1107 23:37:02.509831  292042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:37:02.509855  292042 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-537363 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:37:02.607705  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Status}}
	I1107 23:37:02.653788  292042 machine.go:88] provisioning docker machine ...
	I1107 23:37:02.653819  292042 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-537363"
	I1107 23:37:02.653885  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:02.680175  292042 main.go:141] libmachine: Using SSH client type: native
	I1107 23:37:02.681327  292042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1107 23:37:02.681393  292042 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-537363 && echo "ingress-addon-legacy-537363" | sudo tee /etc/hostname
	I1107 23:37:02.682098  292042 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1107 23:37:05.824819  292042 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-537363
	
	I1107 23:37:05.824907  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:05.842962  292042 main.go:141] libmachine: Using SSH client type: native
	I1107 23:37:05.843378  292042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1107 23:37:05.843404  292042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-537363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-537363/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-537363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:37:05.970623  292042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
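The hostname guard the SSH command above just ran can be exercised locally against a scratch file; the `HOSTS` path and the pre-existing `old-name` entry below are assumptions for illustration, not taken from the test host:

```shell
# Re-run of the /etc/hosts guard from the log against a scratch file
# (HOSTS path and the starting "old-name" entry are hypothetical).
NAME=ingress-addon-legacy-537363
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "\s$NAME\$" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    # rewrite the existing 127.0.1.1 entry instead of appending a duplicate
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Rewriting the `127.0.1.1` line rather than `127.0.0.1` matches the Debian/Ubuntu convention of mapping the machine's own hostname to `127.0.1.1`.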
	I1107 23:37:05.970698  292042 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-253150/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-253150/.minikube}
	I1107 23:37:05.970734  292042 ubuntu.go:177] setting up certificates
	I1107 23:37:05.970769  292042 provision.go:83] configureAuth start
	I1107 23:37:05.970872  292042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-537363
	I1107 23:37:05.988826  292042 provision.go:138] copyHostCerts
	I1107 23:37:05.988868  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-253150/.minikube/ca.pem
	I1107 23:37:05.988903  292042 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-253150/.minikube/ca.pem, removing ...
	I1107 23:37:05.988910  292042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-253150/.minikube/ca.pem
	I1107 23:37:05.988987  292042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-253150/.minikube/ca.pem (1078 bytes)
	I1107 23:37:05.989064  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-253150/.minikube/cert.pem
	I1107 23:37:05.989081  292042 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-253150/.minikube/cert.pem, removing ...
	I1107 23:37:05.989085  292042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-253150/.minikube/cert.pem
	I1107 23:37:05.989109  292042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-253150/.minikube/cert.pem (1123 bytes)
	I1107 23:37:05.989174  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-253150/.minikube/key.pem
	I1107 23:37:05.989190  292042 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-253150/.minikube/key.pem, removing ...
	I1107 23:37:05.989194  292042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-253150/.minikube/key.pem
	I1107 23:37:05.989219  292042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-253150/.minikube/key.pem (1675 bytes)
	I1107 23:37:05.989467  292042 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-537363 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-537363]
	I1107 23:37:06.458220  292042 provision.go:172] copyRemoteCerts
	I1107 23:37:06.458292  292042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:37:06.458339  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:06.476468  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:06.571757  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:37:06.571818  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:37:06.600063  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:37:06.600123  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1107 23:37:06.628557  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:37:06.628622  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:37:06.657371  292042 provision.go:86] duration metric: configureAuth took 686.572879ms
	I1107 23:37:06.657397  292042 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:37:06.657598  292042 config.go:182] Loaded profile config "ingress-addon-legacy-537363": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1107 23:37:06.657615  292042 machine.go:91] provisioned docker machine in 4.00380095s
	I1107 23:37:06.657626  292042 client.go:171] LocalClient.Create took 12.039890834s
	I1107 23:37:06.657639  292042 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-537363" took 12.039935141s
	I1107 23:37:06.657648  292042 start.go:300] post-start starting for "ingress-addon-legacy-537363" (driver="docker")
	I1107 23:37:06.657657  292042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:37:06.657726  292042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:37:06.657769  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:06.675688  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:06.768412  292042 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:37:06.772619  292042 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:37:06.772668  292042 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:37:06.772680  292042 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:37:06.772688  292042 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:37:06.772701  292042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-253150/.minikube/addons for local assets ...
	I1107 23:37:06.772770  292042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-253150/.minikube/files for local assets ...
	I1107 23:37:06.772852  292042 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/2584902.pem -> 2584902.pem in /etc/ssl/certs
	I1107 23:37:06.772865  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/2584902.pem -> /etc/ssl/certs/2584902.pem
	I1107 23:37:06.772976  292042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:37:06.783632  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/2584902.pem --> /etc/ssl/certs/2584902.pem (1708 bytes)
	I1107 23:37:06.812910  292042 start.go:303] post-start completed in 155.247978ms
	I1107 23:37:06.813429  292042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-537363
	I1107 23:37:06.831600  292042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/config.json ...
	I1107 23:37:06.831890  292042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:37:06.831932  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:06.850570  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:06.939432  292042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:37:06.945397  292042 start.go:128] duration metric: createHost completed in 12.329988585s
	I1107 23:37:06.945430  292042 start.go:83] releasing machines lock for "ingress-addon-legacy-537363", held for 12.330117618s
	I1107 23:37:06.945505  292042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-537363
	I1107 23:37:06.963005  292042 ssh_runner.go:195] Run: cat /version.json
	I1107 23:37:06.963054  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:06.963078  292042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:37:06.963138  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:06.986903  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:06.989567  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:07.281629  292042 ssh_runner.go:195] Run: systemctl --version
	I1107 23:37:07.287513  292042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:37:07.293189  292042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1107 23:37:07.324749  292042 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:37:07.324888  292042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:37:07.360033  292042 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:37:07.360056  292042 start.go:472] detecting cgroup driver to use...
	I1107 23:37:07.360089  292042 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:37:07.360145  292042 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1107 23:37:07.375132  292042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 23:37:07.388990  292042 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:37:07.389063  292042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:37:07.405536  292042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:37:07.422211  292042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:37:07.511726  292042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:37:07.615592  292042 docker.go:219] disabling docker service ...
	I1107 23:37:07.615710  292042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:37:07.638355  292042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:37:07.654181  292042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:37:07.763493  292042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:37:07.862993  292042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:37:07.877059  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:37:07.897398  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1107 23:37:07.910025  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1107 23:37:07.922900  292042 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1107 23:37:07.922982  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1107 23:37:07.935911  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:37:07.948994  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1107 23:37:07.962025  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 23:37:07.975436  292042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:37:07.987600  292042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1107 23:37:08.000686  292042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:37:08.012085  292042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:37:08.023542  292042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:37:08.124913  292042 ssh_runner.go:195] Run: sudo systemctl restart containerd
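The run of `sed` edits above rewrites four containerd settings in place before the daemon restart: the pause image, `SystemdCgroup`, the runc shim version, and the CNI `conf_dir`. A scratch-file sketch with an assumed starting `config.toml` (the real file's contents are not shown in the log):

```shell
# Hypothetical scratch config.toml exercising the sed edits from the log;
# the starting values are assumptions chosen so each edit has work to do.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"
  conf_dir = "/etc/cni/net.mk"
EOF

sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' "$CONF"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CONF"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CONF"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CONF"
cat "$CONF"
```

`SystemdCgroup = false` is the edit that makes containerd agree with the "cgroupfs" driver detected on the host at 23:37:07.360089.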
	I1107 23:37:08.256744  292042 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1107 23:37:08.256852  292042 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1107 23:37:08.261904  292042 start.go:540] Will wait 60s for crictl version
	I1107 23:37:08.261971  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:08.266605  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:37:08.311128  292042 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1107 23:37:08.311228  292042 ssh_runner.go:195] Run: containerd --version
	I1107 23:37:08.337925  292042 ssh_runner.go:195] Run: containerd --version
	I1107 23:37:08.373495  292042 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.24 ...
	I1107 23:37:08.375214  292042 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-537363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:37:08.399388  292042 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1107 23:37:08.404396  292042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
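The grep-then-rewrite pair above keeps the `host.minikube.internal` entry idempotent: filter out any stale line, then append the fresh one. A sketch against a scratch file standing in for `/etc/hosts` (the replacement gateway `192.168.58.1` is hypothetical):

```shell
# Filter-and-rewrite update of a hosts entry, as in the log; H is a
# scratch file and 192.168.58.1 a made-up new gateway address.
H=$(mktemp)
printf '127.0.0.1 localhost\n192.168.49.1\thost.minikube.internal\n' > "$H"

# drop the old entry, append the new one, swap files atomically
{ grep -v $'\thost.minikube.internal$' "$H"; printf '192.168.58.1\thost.minikube.internal\n'; } > "$H.new"
mv "$H.new" "$H"
cat "$H"
```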
	I1107 23:37:08.418787  292042 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1107 23:37:08.418862  292042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:37:08.460772  292042 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:37:08.460850  292042 ssh_runner.go:195] Run: which lz4
	I1107 23:37:08.465452  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1107 23:37:08.465555  292042 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 23:37:08.470026  292042 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
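The nonzero `stat` exit here is the expected branch: `ssh_runner` probes the remote path first and only falls back to `scp` when the probe fails. The probe-then-copy shape, sketched on a local scratch directory (paths hypothetical):

```shell
# Probe-then-copy: a failing stat means the file is absent and must be
# transferred; a succeeding stat reports size/mtime for a cheap match check.
T=$(mktemp -d)
if ! stat -c "%s %y" "$T/preloaded.tar.lz4" >/dev/null 2>&1; then
  echo "absent: would scp the preload tarball over"
  touch "$T/preloaded.tar.lz4"   # stand-in for the real copy
fi
stat -c "%s" "$T/preloaded.tar.lz4"
```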
	I1107 23:37:08.470062  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1107 23:37:10.739852  292042 containerd.go:547] Took 2.274334 seconds to copy over tarball
	I1107 23:37:10.739981  292042 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:37:13.419016  292042 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.678980162s)
	I1107 23:37:13.419041  292042 containerd.go:554] Took 2.679116 seconds to extract the tarball
	I1107 23:37:13.419051  292042 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 23:37:13.506038  292042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:37:13.597705  292042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 23:37:13.733309  292042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:37:13.785663  292042 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:37:13.785691  292042 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 23:37:13.785729  292042 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:37:13.785933  292042 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:37:13.786026  292042 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:37:13.786109  292042 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:37:13.786187  292042 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:37:13.786258  292042 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1107 23:37:13.786320  292042 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:37:13.786385  292042 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1107 23:37:13.787678  292042 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:37:13.788114  292042 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:37:13.788270  292042 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:37:13.788392  292042 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1107 23:37:13.788516  292042 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:37:13.788629  292042 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:37:13.788755  292042 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1107 23:37:13.788868  292042 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1107 23:37:14.313827  292042 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.313983  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	I1107 23:37:14.325373  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W1107 23:37:14.344101  292042 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.344329  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W1107 23:37:14.352347  292042 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.352495  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W1107 23:37:14.355810  292042 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.356002  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W1107 23:37:14.377332  292042 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.377495  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W1107 23:37:14.379572  292042 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.379776  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W1107 23:37:14.753512  292042 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1107 23:37:14.753701  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:37:14.841103  292042 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1107 23:37:14.841254  292042 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:37:14.841388  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:14.841555  292042 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1107 23:37:14.841657  292042 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1107 23:37:14.841756  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.218574  292042 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1107 23:37:15.218655  292042 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1107 23:37:15.218685  292042 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1107 23:37:15.218724  292042 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:37:15.218764  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.218765  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.218817  292042 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1107 23:37:15.218880  292042 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:37:15.218936  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.299043  292042 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1107 23:37:15.299086  292042 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:37:15.299155  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.299245  292042 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1107 23:37:15.299270  292042 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:37:15.299309  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.314149  292042 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1107 23:37:15.314232  292042 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:37:15.314282  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:37:15.314328  292042 ssh_runner.go:195] Run: which crictl
	I1107 23:37:15.314242  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1107 23:37:15.314391  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:37:15.314448  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1107 23:37:15.314426  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:37:15.314507  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1107 23:37:15.314545  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:37:15.510867  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1107 23:37:15.510928  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1107 23:37:15.510984  292042 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:37:15.511054  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1107 23:37:15.511092  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1107 23:37:15.511135  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1107 23:37:15.511173  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1107 23:37:15.511227  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1107 23:37:15.569532  292042 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1107 23:37:15.569611  292042 cache_images.go:92] LoadImages completed in 1.78390655s
	W1107 23:37:15.569680  292042 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-253150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I1107 23:37:15.569740  292042 ssh_runner.go:195] Run: sudo crictl info
	I1107 23:37:15.616376  292042 cni.go:84] Creating CNI manager for ""
	I1107 23:37:15.616399  292042 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:37:15.616428  292042 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:37:15.616447  292042 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-537363 NodeName:ingress-addon-legacy-537363 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1107 23:37:15.616581  292042 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-537363"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:37:15.616647  292042 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-537363 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:37:15.616723  292042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1107 23:37:15.627353  292042 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:37:15.627433  292042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:37:15.638161  292042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1107 23:37:15.662264  292042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1107 23:37:15.683883  292042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I1107 23:37:15.705067  292042 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:37:15.709778  292042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:37:15.722834  292042 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363 for IP: 192.168.49.2
	I1107 23:37:15.722866  292042 certs.go:190] acquiring lock for shared ca certs: {Name:mk29255a37c97dfa8464e8fe04cc7357102af55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:15.723003  292042 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key
	I1107 23:37:15.723048  292042 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key
	I1107 23:37:15.723102  292042 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.key
	I1107 23:37:15.723119  292042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt with IP's: []
	I1107 23:37:16.553173  292042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt ...
	I1107 23:37:16.553207  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: {Name:mk62eb5574c877e9a081592f208f3e0ff809b467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:16.553438  292042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.key ...
	I1107 23:37:16.553455  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.key: {Name:mk2ae09d81e422655e5cfea7cf41b72eb8dc6927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:16.553542  292042 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key.dd3b5fb2
	I1107 23:37:16.553566  292042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:37:17.041775  292042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt.dd3b5fb2 ...
	I1107 23:37:17.041806  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt.dd3b5fb2: {Name:mk0c0a4e114890184713083aed0a2529d3486c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:17.041992  292042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key.dd3b5fb2 ...
	I1107 23:37:17.042008  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key.dd3b5fb2: {Name:mk4889059023c41e3291e7cf35994b9944766bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:17.042088  292042 certs.go:337] copying /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt
	I1107 23:37:17.042175  292042 certs.go:341] copying /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key
	I1107 23:37:17.042238  292042 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.key
	I1107 23:37:17.042250  292042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.crt with IP's: []
	I1107 23:37:17.306619  292042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.crt ...
	I1107 23:37:17.306656  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.crt: {Name:mk20ab63ff5b255edf90ccb0c04ec33de5450579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:17.306837  292042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.key ...
	I1107 23:37:17.306852  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.key: {Name:mk86e8f6d3aa6853a60185f26674f8c3233740bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:17.306931  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:37:17.306954  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:37:17.306967  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:37:17.306981  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:37:17.306995  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:37:17.307012  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:37:17.307027  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:37:17.307042  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:37:17.307097  292042 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/258490.pem (1338 bytes)
	W1107 23:37:17.307143  292042 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/258490_empty.pem, impossibly tiny 0 bytes
	I1107 23:37:17.307158  292042 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:37:17.307184  292042 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:37:17.307215  292042 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:37:17.307242  292042 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/home/jenkins/minikube-integration/17585-253150/.minikube/certs/key.pem (1675 bytes)
	I1107 23:37:17.307290  292042 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/2584902.pem (1708 bytes)
	I1107 23:37:17.307330  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:17.307344  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/certs/258490.pem -> /usr/share/ca-certificates/258490.pem
	I1107 23:37:17.307357  292042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/2584902.pem -> /usr/share/ca-certificates/2584902.pem
	I1107 23:37:17.307938  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:37:17.337801  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:37:17.368100  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:37:17.397522  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:37:17.426628  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:37:17.456074  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:37:17.485125  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:37:17.515607  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:37:17.545733  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:37:17.575105  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/certs/258490.pem --> /usr/share/ca-certificates/258490.pem (1338 bytes)
	I1107 23:37:17.604974  292042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/ssl/certs/2584902.pem --> /usr/share/ca-certificates/2584902.pem (1708 bytes)
	I1107 23:37:17.634270  292042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:37:17.655819  292042 ssh_runner.go:195] Run: openssl version
	I1107 23:37:17.663100  292042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:37:17.674740  292042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:17.679313  292042 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:17.679377  292042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:17.688061  292042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:37:17.699771  292042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/258490.pem && ln -fs /usr/share/ca-certificates/258490.pem /etc/ssl/certs/258490.pem"
	I1107 23:37:17.711527  292042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/258490.pem
	I1107 23:37:17.716381  292042 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:33 /usr/share/ca-certificates/258490.pem
	I1107 23:37:17.716456  292042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/258490.pem
	I1107 23:37:17.725574  292042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/258490.pem /etc/ssl/certs/51391683.0"
	I1107 23:37:17.737917  292042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2584902.pem && ln -fs /usr/share/ca-certificates/2584902.pem /etc/ssl/certs/2584902.pem"
	I1107 23:37:17.749681  292042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2584902.pem
	I1107 23:37:17.754453  292042 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:33 /usr/share/ca-certificates/2584902.pem
	I1107 23:37:17.754535  292042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2584902.pem
	I1107 23:37:17.763334  292042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2584902.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:37:17.775104  292042 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:37:17.779529  292042 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:37:17.779588  292042 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-537363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:37:17.779664  292042 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1107 23:37:17.779731  292042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:37:17.823296  292042 cri.go:89] found id: ""
	I1107 23:37:17.823417  292042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:37:17.834320  292042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:37:17.845162  292042 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:37:17.845320  292042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:37:17.856351  292042 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:37:17.856399  292042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:37:17.914528  292042 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1107 23:37:17.914998  292042 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:37:17.969051  292042 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:37:17.969209  292042 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1107 23:37:17.969293  292042 kubeadm.go:322] OS: Linux
	I1107 23:37:17.969373  292042 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:37:17.969475  292042 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:37:17.969543  292042 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:37:17.969624  292042 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:37:17.969700  292042 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:37:17.969775  292042 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:37:18.069946  292042 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:37:18.070164  292042 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:37:18.070283  292042 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:37:18.328085  292042 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:37:18.328214  292042 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:37:18.328259  292042 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:37:18.448287  292042 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:37:18.452827  292042 out.go:204]   - Generating certificates and keys ...
	I1107 23:37:18.452936  292042 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:37:18.453006  292042 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:37:18.724400  292042 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:37:21.413392  292042 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:37:21.919857  292042 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:37:22.276876  292042 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:37:22.550240  292042 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:37:22.550927  292042 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-537363 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:37:23.419278  292042 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:37:23.419774  292042 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-537363 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:37:24.368075  292042 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:37:24.878132  292042 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:37:25.305931  292042 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:37:25.306229  292042 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:37:26.352071  292042 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:37:27.198484  292042 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:37:28.064022  292042 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:37:28.785330  292042 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:37:28.786245  292042 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:37:28.788344  292042 out.go:204]   - Booting up control plane ...
	I1107 23:37:28.788442  292042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:37:28.802928  292042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:37:28.804498  292042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:37:28.805674  292042 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:37:28.808377  292042 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:37:40.310820  292042 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502370 seconds
	I1107 23:37:40.310934  292042 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:37:40.326036  292042 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:37:40.852789  292042 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:37:40.852937  292042 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-537363 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1107 23:37:41.361898  292042 kubeadm.go:322] [bootstrap-token] Using token: 9lbvic.hly3k3vkhz7t45l3
	I1107 23:37:41.363701  292042 out.go:204]   - Configuring RBAC rules ...
	I1107 23:37:41.363818  292042 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:37:41.372165  292042 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:37:41.383722  292042 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:37:41.386923  292042 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:37:41.389608  292042 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:37:41.393767  292042 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:37:41.404165  292042 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:37:41.695895  292042 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:37:41.817400  292042 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:37:41.819568  292042 kubeadm.go:322] 
	I1107 23:37:41.819640  292042 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:37:41.819646  292042 kubeadm.go:322] 
	I1107 23:37:41.819718  292042 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:37:41.819723  292042 kubeadm.go:322] 
	I1107 23:37:41.819746  292042 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:37:41.820317  292042 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:37:41.820413  292042 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:37:41.820457  292042 kubeadm.go:322] 
	I1107 23:37:41.820548  292042 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:37:41.820667  292042 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:37:41.820791  292042 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:37:41.820815  292042 kubeadm.go:322] 
	I1107 23:37:41.821141  292042 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:37:41.821232  292042 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:37:41.821239  292042 kubeadm.go:322] 
	I1107 23:37:41.821656  292042 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9lbvic.hly3k3vkhz7t45l3 \
	I1107 23:37:41.821761  292042 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31e24392fb732769393a2f48b7656045863010b5e31bad5114f11c508fcda3c9 \
	I1107 23:37:41.822050  292042 kubeadm.go:322]     --control-plane 
	I1107 23:37:41.822060  292042 kubeadm.go:322] 
	I1107 23:37:41.822453  292042 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:37:41.822464  292042 kubeadm.go:322] 
	I1107 23:37:41.822906  292042 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9lbvic.hly3k3vkhz7t45l3 \
	I1107 23:37:41.823270  292042 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31e24392fb732769393a2f48b7656045863010b5e31bad5114f11c508fcda3c9 
	I1107 23:37:41.832259  292042 kubeadm.go:322] W1107 23:37:17.913601    1106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 23:37:41.832467  292042 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1107 23:37:41.832564  292042 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:37:41.832688  292042 kubeadm.go:322] W1107 23:37:28.802765    1106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:37:41.832803  292042 kubeadm.go:322] W1107 23:37:28.804439    1106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:37:41.832818  292042 cni.go:84] Creating CNI manager for ""
	I1107 23:37:41.832826  292042 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:37:41.834690  292042 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:37:41.836301  292042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:37:41.842005  292042 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1107 23:37:41.842022  292042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:37:41.871057  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:37:42.375415  292042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:37:42.375556  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:42.375633  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=ingress-addon-legacy-537363 minikube.k8s.io/updated_at=2023_11_07T23_37_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:42.387873  292042 ops.go:34] apiserver oom_adj: -16
	I1107 23:37:42.512360  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:42.648283  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:43.246998  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:43.747098  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:44.246531  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:44.746800  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:45.247796  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:45.746590  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:46.246540  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:46.746720  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:47.246573  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:47.747435  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:48.246610  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:48.746491  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:49.246699  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:49.747314  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:50.246430  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:50.747462  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:51.246459  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:51.746507  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:52.247301  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:52.746393  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:53.246461  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:53.746501  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:54.247287  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:54.747233  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:55.247202  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:55.747309  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:56.247397  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:56.746396  292042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:37:56.973889  292042 kubeadm.go:1081] duration metric: took 14.598381597s to wait for elevateKubeSystemPrivileges.
	I1107 23:37:56.973919  292042 kubeadm.go:406] StartCluster complete in 39.19433525s
	I1107 23:37:56.973936  292042 settings.go:142] acquiring lock: {Name:mk0c44fb0eb9743c4797be21f306bacb6fb52d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:56.973996  292042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:37:56.974663  292042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/kubeconfig: {Name:mk8224b7929d8ccd4d6d2717b272fe897cc064e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:56.975363  292042 kapi.go:59] client config for ingress-addon-legacy-537363: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.key", CAFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:37:56.976682  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:37:56.976946  292042 config.go:182] Loaded profile config "ingress-addon-legacy-537363": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1107 23:37:56.976995  292042 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:37:56.977047  292042 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-537363"
	I1107 23:37:56.977067  292042 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-537363"
	I1107 23:37:56.977123  292042 host.go:66] Checking if "ingress-addon-legacy-537363" exists ...
	I1107 23:37:56.977649  292042 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:37:56.977687  292042 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-537363"
	I1107 23:37:56.977703  292042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-537363"
	I1107 23:37:56.978004  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Status}}
	I1107 23:37:56.978498  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Status}}
	I1107 23:37:57.026470  292042 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:37:57.030766  292042 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:37:57.030790  292042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:37:57.030878  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:57.035882  292042 kapi.go:59] client config for ingress-addon-legacy-537363: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.key", CAFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:37:57.036146  292042 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-537363"
	I1107 23:37:57.036180  292042 host.go:66] Checking if "ingress-addon-legacy-537363" exists ...
	I1107 23:37:57.036673  292042 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-537363 --format={{.State.Status}}
	I1107 23:37:57.050857  292042 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-537363" context rescaled to 1 replicas
	I1107 23:37:57.050899  292042 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1107 23:37:57.055538  292042 out.go:177] * Verifying Kubernetes components...
	I1107 23:37:57.057368  292042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:37:57.083247  292042 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:37:57.083268  292042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:37:57.083343  292042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-537363
	I1107 23:37:57.093248  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:57.126606  292042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/ingress-addon-legacy-537363/id_rsa Username:docker}
	I1107 23:37:57.350633  292042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:37:57.400630  292042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:37:57.401371  292042 kapi.go:59] client config for ingress-addon-legacy-537363: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.key", CAFile:"/home/jenkins/minikube-integration/17585-253150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdc10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:37:57.401645  292042 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-537363" to be "Ready" ...
	I1107 23:37:57.405396  292042 node_ready.go:49] node "ingress-addon-legacy-537363" has status "Ready":"True"
	I1107 23:37:57.405466  292042 node_ready.go:38] duration metric: took 3.806064ms waiting for node "ingress-addon-legacy-537363" to be "Ready" ...
	I1107 23:37:57.405495  292042 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:37:57.414585  292042 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-9sqff" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:57.435595  292042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:37:57.936610  292042 pod_ready.go:97] error getting pod "coredns-66bff467f8-9sqff" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-9sqff" not found
	I1107 23:37:57.936692  292042 pod_ready.go:81] duration metric: took 522.040548ms waiting for pod "coredns-66bff467f8-9sqff" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:57.936719  292042 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-9sqff" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-9sqff" not found
	I1107 23:37:57.936739  292042 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:58.012614  292042 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1107 23:37:58.135316  292042 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1107 23:37:58.137979  292042 addons.go:502] enable addons completed in 1.160970756s: enabled=[default-storageclass storage-provisioner]
	I1107 23:37:59.958833  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:01.958971  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:03.959359  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:06.458558  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:08.959141  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:11.458945  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:13.958466  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:15.958760  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:17.959268  292042 pod_ready.go:102] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:38:20.459402  292042 pod_ready.go:92] pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace has status "Ready":"True"
	I1107 23:38:20.459430  292042 pod_ready.go:81] duration metric: took 22.522663599s waiting for pod "coredns-66bff467f8-qq8tv" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.459443  292042 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.464517  292042 pod_ready.go:92] pod "etcd-ingress-addon-legacy-537363" in "kube-system" namespace has status "Ready":"True"
	I1107 23:38:20.464541  292042 pod_ready.go:81] duration metric: took 5.089436ms waiting for pod "etcd-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.464561  292042 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.469862  292042 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-537363" in "kube-system" namespace has status "Ready":"True"
	I1107 23:38:20.469889  292042 pod_ready.go:81] duration metric: took 5.319609ms waiting for pod "kube-apiserver-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.469902  292042 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.475389  292042 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-537363" in "kube-system" namespace has status "Ready":"True"
	I1107 23:38:20.475415  292042 pod_ready.go:81] duration metric: took 5.504852ms waiting for pod "kube-controller-manager-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.475427  292042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vk7n7" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.480553  292042 pod_ready.go:92] pod "kube-proxy-vk7n7" in "kube-system" namespace has status "Ready":"True"
	I1107 23:38:20.480580  292042 pod_ready.go:81] duration metric: took 5.145819ms waiting for pod "kube-proxy-vk7n7" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.480592  292042 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.654971  292042 request.go:629] Waited for 174.317649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-537363
	I1107 23:38:20.854084  292042 request.go:629] Waited for 196.309758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-537363
	I1107 23:38:20.856759  292042 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-537363" in "kube-system" namespace has status "Ready":"True"
	I1107 23:38:20.856786  292042 pod_ready.go:81] duration metric: took 376.185459ms waiting for pod "kube-scheduler-ingress-addon-legacy-537363" in "kube-system" namespace to be "Ready" ...
	I1107 23:38:20.856797  292042 pod_ready.go:38] duration metric: took 23.451276274s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:38:20.856810  292042 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:38:20.856876  292042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:38:20.870766  292042 api_server.go:72] duration metric: took 23.819834045s to wait for apiserver process to appear ...
	I1107 23:38:20.870794  292042 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:38:20.870811  292042 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1107 23:38:20.879887  292042 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1107 23:38:20.880813  292042 api_server.go:141] control plane version: v1.18.20
	I1107 23:38:20.880843  292042 api_server.go:131] duration metric: took 10.041717ms to wait for apiserver health ...
	I1107 23:38:20.880853  292042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:38:21.054076  292042 request.go:629] Waited for 173.142153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:38:21.061721  292042 system_pods.go:59] 8 kube-system pods found
	I1107 23:38:21.061760  292042 system_pods.go:61] "coredns-66bff467f8-qq8tv" [26432116-83e0-4209-bf75-54d54cbeab0f] Running
	I1107 23:38:21.061768  292042 system_pods.go:61] "etcd-ingress-addon-legacy-537363" [66b88972-01d2-418a-a684-b59cc8574879] Running
	I1107 23:38:21.061773  292042 system_pods.go:61] "kindnet-6xctc" [7abbdc0d-6c46-4119-930e-54ca08bf439d] Running
	I1107 23:38:21.061779  292042 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-537363" [8e5e224d-2872-4468-80b4-2a0e2bd7c5b8] Running
	I1107 23:38:21.061784  292042 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-537363" [eeedc7b5-c738-47a7-8194-0b3325283a89] Running
	I1107 23:38:21.061811  292042 system_pods.go:61] "kube-proxy-vk7n7" [1b4d2aef-d2c7-4b04-a988-f40670a3c470] Running
	I1107 23:38:21.061817  292042 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-537363" [f9c57510-8ed9-44aa-8e84-8f676b59052f] Running
	I1107 23:38:21.061829  292042 system_pods.go:61] "storage-provisioner" [28da3797-afe2-40ae-afb0-4f07c5c20b86] Running
	I1107 23:38:21.061835  292042 system_pods.go:74] duration metric: took 180.976766ms to wait for pod list to return data ...
	I1107 23:38:21.061850  292042 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:38:21.254292  292042 request.go:629] Waited for 192.343791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:38:21.256736  292042 default_sa.go:45] found service account: "default"
	I1107 23:38:21.256764  292042 default_sa.go:55] duration metric: took 194.899081ms for default service account to be created ...
	I1107 23:38:21.256775  292042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:38:21.454059  292042 request.go:629] Waited for 197.224304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:38:21.460152  292042 system_pods.go:86] 8 kube-system pods found
	I1107 23:38:21.460181  292042 system_pods.go:89] "coredns-66bff467f8-qq8tv" [26432116-83e0-4209-bf75-54d54cbeab0f] Running
	I1107 23:38:21.460189  292042 system_pods.go:89] "etcd-ingress-addon-legacy-537363" [66b88972-01d2-418a-a684-b59cc8574879] Running
	I1107 23:38:21.460194  292042 system_pods.go:89] "kindnet-6xctc" [7abbdc0d-6c46-4119-930e-54ca08bf439d] Running
	I1107 23:38:21.460199  292042 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-537363" [8e5e224d-2872-4468-80b4-2a0e2bd7c5b8] Running
	I1107 23:38:21.460204  292042 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-537363" [eeedc7b5-c738-47a7-8194-0b3325283a89] Running
	I1107 23:38:21.460209  292042 system_pods.go:89] "kube-proxy-vk7n7" [1b4d2aef-d2c7-4b04-a988-f40670a3c470] Running
	I1107 23:38:21.460215  292042 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-537363" [f9c57510-8ed9-44aa-8e84-8f676b59052f] Running
	I1107 23:38:21.460221  292042 system_pods.go:89] "storage-provisioner" [28da3797-afe2-40ae-afb0-4f07c5c20b86] Running
	I1107 23:38:21.460233  292042 system_pods.go:126] duration metric: took 203.452733ms to wait for k8s-apps to be running ...
	I1107 23:38:21.460244  292042 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:38:21.460309  292042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:38:21.474473  292042 system_svc.go:56] duration metric: took 14.21887ms WaitForService to wait for kubelet.
	I1107 23:38:21.474500  292042 kubeadm.go:581] duration metric: took 24.423574182s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:38:21.474520  292042 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:38:21.654931  292042 request.go:629] Waited for 180.32588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1107 23:38:21.657720  292042 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1107 23:38:21.657763  292042 node_conditions.go:123] node cpu capacity is 2
	I1107 23:38:21.657774  292042 node_conditions.go:105] duration metric: took 183.249597ms to run NodePressure ...
	I1107 23:38:21.657786  292042 start.go:228] waiting for startup goroutines ...
	I1107 23:38:21.657793  292042 start.go:233] waiting for cluster config update ...
	I1107 23:38:21.657806  292042 start.go:242] writing updated cluster config ...
	I1107 23:38:21.658089  292042 ssh_runner.go:195] Run: rm -f paused
	I1107 23:38:21.716994  292042 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1107 23:38:21.719050  292042 out.go:177] 
	W1107 23:38:21.720561  292042 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1107 23:38:21.722263  292042 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1107 23:38:21.724060  292042 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-537363" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	956fdc5b3cbcb       dd1b12fcb6097       12 seconds ago       Exited              hello-world-app           2                   249f169ccc2d1       hello-world-app-5f5d8b66bb-49cjt
	9c8f335f5f593       aae348c9fbd40       36 seconds ago       Running             nginx                     0                   18af6ac84e2a4       nginx
	ec892a1fd282b       d7f0cba3aa5bf       53 seconds ago       Exited              controller                0                   51117d0121d2f       ingress-nginx-controller-7fcf777cb7-96ms5
	07ce864faddce       a883f7fc35610       58 seconds ago       Exited              patch                     0                   b21ee254962f9       ingress-nginx-admission-patch-zsffp
	09658a772e3af       a883f7fc35610       59 seconds ago       Exited              create                    0                   7af3b1c549163       ingress-nginx-admission-create-f6rk9
	8990b84f8cdd7       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   c5d803b32b4bb       coredns-66bff467f8-qq8tv
	59923d3995646       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   e318d30c815b3       storage-provisioner
	06a064f0da90c       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   6eb9c73d24608       kindnet-6xctc
	3f4b5f15f51cd       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   25e4b6c009a84       kube-proxy-vk7n7
	c8a3532fd3572       095f37015706d       About a minute ago   Running             kube-scheduler            0                   a1857bedc286a       kube-scheduler-ingress-addon-legacy-537363
	8b09264b1dfad       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   bd0441f1ad7e1       kube-apiserver-ingress-addon-legacy-537363
	604dc422f2a5f       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   08bcd4ec46a8a       etcd-ingress-addon-legacy-537363
	e37b5c9cbe9a3       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   1b0285d09c3fd       kube-controller-manager-ingress-addon-legacy-537363
	
	* 
	* ==> containerd <==
	* Nov 07 23:39:13 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:13.387642237Z" level=info msg="StopPodSandbox for \"0b9d59d37698b6a8434ec60a1ef11e0b34d496df1147a3541c219325264fdc31\" returns successfully"
	Nov 07 23:39:16 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:16.319984802Z" level=info msg="StopContainer for \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" with timeout 2 (s)"
	Nov 07 23:39:16 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:16.320590623Z" level=info msg="Stop container \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" with signal terminated"
	Nov 07 23:39:16 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:16.324268225Z" level=info msg="StopContainer for \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" with timeout 2 (s)"
	Nov 07 23:39:16 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:16.344489402Z" level=info msg="Skipping the sending of signal terminated to container \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" because a prior stop with timeout>0 request already sent the signal"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.345205834Z" level=info msg="Kill container \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\""
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.345215204Z" level=info msg="Kill container \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\""
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.447338158Z" level=info msg="shim disconnected" id=ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.447522819Z" level=warning msg="cleaning up after shim disconnected" id=ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a namespace=k8s.io
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.447535627Z" level=info msg="cleaning up dead shim"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.459060870Z" level=warning msg="cleanup warnings time=\"2023-11-07T23:39:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4704 runtime=io.containerd.runc.v2\n"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.461688722Z" level=info msg="StopContainer for \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" returns successfully"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.461702941Z" level=info msg="StopContainer for \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" returns successfully"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.462358369Z" level=info msg="StopPodSandbox for \"51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0\""
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.462430466Z" level=info msg="Container to stop \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.462645888Z" level=info msg="StopPodSandbox for \"51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0\""
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.462687282Z" level=info msg="Container to stop \"ec892a1fd282b0488536d718bd21223ed67b8ce6826f59fadcc4996f3c27d63a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.502245136Z" level=info msg="shim disconnected" id=51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.502315099Z" level=warning msg="cleaning up after shim disconnected" id=51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0 namespace=k8s.io
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.502327128Z" level=info msg="cleaning up dead shim"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.513740184Z" level=warning msg="cleanup warnings time=\"2023-11-07T23:39:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4741 runtime=io.containerd.runc.v2\n"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.567541076Z" level=info msg="TearDown network for sandbox \"51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0\" successfully"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.567798350Z" level=info msg="StopPodSandbox for \"51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0\" returns successfully"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.570323197Z" level=info msg="TearDown network for sandbox \"51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0\" successfully"
	Nov 07 23:39:18 ingress-addon-legacy-537363 containerd[829]: time="2023-11-07T23:39:18.570390018Z" level=info msg="StopPodSandbox for \"51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0\" returns successfully"
	
	* 
	* ==> coredns [8990b84f8cdd792cf5818fa0ae07c67c4dc22bb5d33588b2607e6e509ea88756] <==
	* [INFO] 10.244.0.5:38941 - 21054 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003602s
	[INFO] 10.244.0.5:38941 - 57650 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001194469s
	[INFO] 10.244.0.5:53383 - 52735 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002234717s
	[INFO] 10.244.0.5:53383 - 17154 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002061929s
	[INFO] 10.244.0.5:38941 - 22856 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001705416s
	[INFO] 10.244.0.5:33606 - 37153 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046825s
	[INFO] 10.244.0.5:33606 - 16098 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056721s
	[INFO] 10.244.0.5:38941 - 45638 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000146516s
	[INFO] 10.244.0.5:53383 - 51738 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041205s
	[INFO] 10.244.0.5:47365 - 58602 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000052389s
	[INFO] 10.244.0.5:47365 - 36309 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042928s
	[INFO] 10.244.0.5:33606 - 19831 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045086s
	[INFO] 10.244.0.5:47365 - 39695 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044684s
	[INFO] 10.244.0.5:33606 - 7730 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060971s
	[INFO] 10.244.0.5:47365 - 38099 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038087s
	[INFO] 10.244.0.5:33606 - 39160 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039416s
	[INFO] 10.244.0.5:47365 - 19213 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037636s
	[INFO] 10.244.0.5:33606 - 46998 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056318s
	[INFO] 10.244.0.5:47365 - 1509 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036364s
	[INFO] 10.244.0.5:33606 - 42688 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001658451s
	[INFO] 10.244.0.5:47365 - 51285 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001242394s
	[INFO] 10.244.0.5:33606 - 60595 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00180477s
	[INFO] 10.244.0.5:47365 - 503 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00140107s
	[INFO] 10.244.0.5:33606 - 58640 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055408s
	[INFO] 10.244.0.5:47365 - 64247 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000206477s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-537363
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-537363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=ingress-addon-legacy-537363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_37_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:37:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-537363
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:39:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:39:15 +0000   Tue, 07 Nov 2023 23:37:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:39:15 +0000   Tue, 07 Nov 2023 23:37:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:39:15 +0000   Tue, 07 Nov 2023 23:37:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:39:15 +0000   Tue, 07 Nov 2023 23:37:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-537363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 00b2b27bf7144fafa5f135c0dc87d5fe
	  System UUID:                74a7f2f7-b859-4172-ad52-8a22692a042c
	  Boot ID:                    ed0b58e3-cdd8-477c-a723-0ef811cfaf0e
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-49cjt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-66bff467f8-qq8tv                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-ingress-addon-legacy-537363                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-6xctc                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-537363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-537363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-vk7n7                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-ingress-addon-legacy-537363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  114s (x4 over 114s)  kubelet     Node ingress-addon-legacy-537363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x4 over 114s)  kubelet     Node ingress-addon-legacy-537363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x4 over 114s)  kubelet     Node ingress-addon-legacy-537363 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-537363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-537363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-537363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-537363 status is now: NodeReady
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001116] FS-Cache: O-key=[8] '973a5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=0000000005158d3b
	[  +0.001064] FS-Cache: N-key=[8] '973a5c0100000000'
	[  +0.002802] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001154] FS-Cache: O-cookie d=000000003c1b4ad3{9p.inode} n=0000000026487c86
	[  +0.001243] FS-Cache: O-key=[8] '973a5c0100000000'
	[  +0.000813] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001073] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=00000000df1635b4
	[  +0.001286] FS-Cache: N-key=[8] '973a5c0100000000'
	[  +2.497408] FS-Cache: Duplicate cookie detected
	[  +0.000713] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000964] FS-Cache: O-cookie d=000000003c1b4ad3{9p.inode} n=00000000327b8b53
	[  +0.001115] FS-Cache: O-key=[8] '963a5c0100000000'
	[  +0.000714] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=00000000c5e65d91
	[  +0.001068] FS-Cache: N-key=[8] '963a5c0100000000'
	[  +0.308572] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001109] FS-Cache: O-cookie d=000000003c1b4ad3{9p.inode} n=000000003fc1f9c4
	[  +0.001087] FS-Cache: O-key=[8] '9c3a5c0100000000'
	[  +0.000783] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001099] FS-Cache: N-cookie d=000000003c1b4ad3{9p.inode} n=0000000005158d3b
	[  +0.001086] FS-Cache: N-key=[8] '9c3a5c0100000000'
	
	* 
	* ==> etcd [604dc422f2a5f148797e6766fb031c62c32cf39817558432b0cd4f97355639cc] <==
	* raft2023/11/07 23:37:31 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/07 23:37:31 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/07 23:37:31 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-07 23:37:31.884320 W | auth: simple token is not cryptographically signed
	2023-11-07 23:37:31.898554 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-07 23:37:31.899145 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/07 23:37:31 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-07 23:37:31.899846 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-07 23:37:31.902559 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-07 23:37:31.903915 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-07 23:37:31.919813 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/07 23:37:32 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/07 23:37:32 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/07 23:37:32 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/07 23:37:32 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/07 23:37:32 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-07 23:37:32.817344 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-07 23:37:32.873386 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-07 23:37:32.909293 I | etcdserver: published {Name:ingress-addon-legacy-537363 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-07 23:37:32.929257 I | embed: ready to serve client requests
	2023-11-07 23:37:32.985245 I | embed: ready to serve client requests
	2023-11-07 23:37:33.054173 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-07 23:37:33.233308 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-07 23:37:33.801272 W | etcdserver: request "ID:8128024998984669443 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (395.877005ms) to execute
	2023-11-07 23:37:33.953382 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  23:39:24 up  2:18,  0 users,  load average: 1.04, 1.66, 2.07
	Linux ingress-addon-legacy-537363 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [06a064f0da90c46005266534ee39bca3c03bbf7d4ef3115d2a3a5c867c492621] <==
	* I1107 23:37:59.714829       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1107 23:37:59.714904       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1107 23:37:59.715024       1 main.go:116] setting mtu 1500 for CNI 
	I1107 23:37:59.715100       1 main.go:146] kindnetd IP family: "ipv4"
	I1107 23:37:59.715205       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1107 23:38:00.212574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:38:00.212959       1 main.go:227] handling current node
	I1107 23:38:10.318878       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:38:10.318907       1 main.go:227] handling current node
	I1107 23:38:20.328258       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:38:20.328289       1 main.go:227] handling current node
	I1107 23:38:30.336068       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:38:30.336095       1 main.go:227] handling current node
	I1107 23:38:40.342266       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:38:40.342296       1 main.go:227] handling current node
	I1107 23:38:50.352623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:38:50.352653       1 main.go:227] handling current node
	I1107 23:39:00.358850       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:39:00.358880       1 main.go:227] handling current node
	I1107 23:39:10.366117       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:39:10.366147       1 main.go:227] handling current node
	I1107 23:39:20.369465       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:39:20.369494       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [8b09264b1dfadea6af5633848b9d315f54b9ffbe3840ad0750dec7f92f2eff4d] <==
	* I1107 23:37:38.896892       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1107 23:37:38.896945       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:37:38.968284       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:37:38.970179       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1107 23:37:38.971369       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:37:39.666904       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1107 23:37:39.667210       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 23:37:39.687148       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1107 23:37:39.694381       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:37:39.694404       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1107 23:37:40.147003       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:37:40.190732       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1107 23:37:40.326965       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1107 23:37:40.328148       1 controller.go:609] quota admission added evaluator for: endpoints
	I1107 23:37:40.335387       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:37:41.137748       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1107 23:37:41.678706       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1107 23:37:41.781582       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1107 23:37:45.186767       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:37:56.619254       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1107 23:37:56.684646       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1107 23:38:22.628615       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1107 23:38:45.381779       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1107 23:39:16.334780       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1107 23:39:16.973034       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [e37b5c9cbe9a3e357cc27c9eb5c9c244483f80c4a397b5908c0409d9e3f8ec92] <==
	* I1107 23:37:56.809378       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1107 23:37:56.823249       1 shared_informer.go:230] Caches are synced for disruption 
	I1107 23:37:56.823271       1 disruption.go:339] Sending events to api server.
	I1107 23:37:56.823845       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1107 23:37:56.854557       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1107 23:37:56.924323       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1107 23:37:56.933793       1 shared_informer.go:230] Caches are synced for expand 
	I1107 23:37:56.939633       1 shared_informer.go:230] Caches are synced for stateful set 
	I1107 23:37:56.945537       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1107 23:37:56.984024       1 shared_informer.go:230] Caches are synced for attach detach 
	I1107 23:37:57.111427       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"937e74ba-862e-4eb9-9b01-16727bc1b451", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1107 23:37:57.129598       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b3223f49-0400-4794-aab4-4f157e0e5954", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-9sqff
	I1107 23:37:57.129706       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:37:57.169042       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:37:57.169061       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 23:37:57.170469       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:37:57.179116       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:38:22.573852       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"35c3e803-d10b-417b-8066-3d940bbf38e2", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1107 23:38:22.600391       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3a4ed8b9-661a-48e7-92c2-04c3eb6ea544", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-96ms5
	I1107 23:38:22.674231       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"16e2c879-461c-4f60-9ac4-017b782a15f5", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-f6rk9
	I1107 23:38:22.694933       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"45b62489-ea59-470d-a900-f66b1186e4df", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zsffp
	I1107 23:38:25.506381       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"16e2c879-461c-4f60-9ac4-017b782a15f5", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:38:25.545906       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"45b62489-ea59-470d-a900-f66b1186e4df", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:38:54.238998       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"61db05ac-c8ca-45fd-abdf-494e80eba64b", APIVersion:"apps/v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1107 23:38:54.250808       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"33f9fbeb-4ac3-471b-afed-18d6cba45846", APIVersion:"apps/v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-49cjt
	
	* 
	* ==> kube-proxy [3f4b5f15f51cdb2566dbecd4b8e0002c270beb65f5cd78d918fc00d65762ff3c] <==
	* W1107 23:37:57.584884       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1107 23:37:57.597269       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1107 23:37:57.597476       1 server_others.go:186] Using iptables Proxier.
	I1107 23:37:57.597908       1 server.go:583] Version: v1.18.20
	I1107 23:37:57.599179       1 config.go:315] Starting service config controller
	I1107 23:37:57.599398       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1107 23:37:57.599666       1 config.go:133] Starting endpoints config controller
	I1107 23:37:57.599755       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1107 23:37:57.699781       1 shared_informer.go:230] Caches are synced for service config 
	I1107 23:37:57.699943       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [c8a3532fd35725555dc8e883e592a60566fff13de326cb71066e1f1dcf8a42fe] <==
	* W1107 23:37:38.848527       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 23:37:38.848793       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 23:37:38.848921       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 23:37:38.849034       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 23:37:38.895984       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:37:38.896112       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:37:38.900251       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1107 23:37:38.905477       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:37:38.905577       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:37:38.905803       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1107 23:37:38.912252       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:37:38.912285       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:37:38.912499       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:37:38.912571       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:37:38.912632       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:37:38.912686       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:37:38.912758       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:37:38.912814       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:37:38.912871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:37:38.912924       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:37:38.912978       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:37:38.913598       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:37:39.738826       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:37:39.777816       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1107 23:37:40.405932       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 07 23:38:57 ingress-addon-legacy-537363 kubelet[1691]: E1107 23:38:57.612481    1691 pod_workers.go:191] Error syncing pod f3cbf78f-483a-4a1c-9b66-80be020a5803 ("kube-ingress-dns-minikube_kube-system(f3cbf78f-483a-4a1c-9b66-80be020a5803)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f3cbf78f-483a-4a1c-9b66-80be020a5803)"
	Nov 07 23:38:57 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:38:57.626272    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d48ffa70a7dcee348306921ed46a5f82c63b5f3c99e7cafad202f9c0338e3091
	Nov 07 23:38:58 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:38:58.632166    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d48ffa70a7dcee348306921ed46a5f82c63b5f3c99e7cafad202f9c0338e3091
	Nov 07 23:38:58 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:38:58.632451    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 89d6e4b97de4f9ddba384ca6c8bf1dc809eb82460c20ba566fc7a2860e751a67
	Nov 07 23:38:58 ingress-addon-legacy-537363 kubelet[1691]: E1107 23:38:58.632692    1691 pod_workers.go:191] Error syncing pod eca8d614-21a1-4cde-bb94-145c8e62ece4 ("hello-world-app-5f5d8b66bb-49cjt_default(eca8d614-21a1-4cde-bb94-145c8e62ece4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-49cjt_default(eca8d614-21a1-4cde-bb94-145c8e62ece4)"
	Nov 07 23:38:59 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:38:59.635754    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 89d6e4b97de4f9ddba384ca6c8bf1dc809eb82460c20ba566fc7a2860e751a67
	Nov 07 23:38:59 ingress-addon-legacy-537363 kubelet[1691]: E1107 23:38:59.636481    1691 pod_workers.go:191] Error syncing pod eca8d614-21a1-4cde-bb94-145c8e62ece4 ("hello-world-app-5f5d8b66bb-49cjt_default(eca8d614-21a1-4cde-bb94-145c8e62ece4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-49cjt_default(eca8d614-21a1-4cde-bb94-145c8e62ece4)"
	Nov 07 23:39:10 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:10.205057    1691 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6kmnp" (UniqueName: "kubernetes.io/secret/f3cbf78f-483a-4a1c-9b66-80be020a5803-minikube-ingress-dns-token-6kmnp") pod "f3cbf78f-483a-4a1c-9b66-80be020a5803" (UID: "f3cbf78f-483a-4a1c-9b66-80be020a5803")
	Nov 07 23:39:10 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:10.209248    1691 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3cbf78f-483a-4a1c-9b66-80be020a5803-minikube-ingress-dns-token-6kmnp" (OuterVolumeSpecName: "minikube-ingress-dns-token-6kmnp") pod "f3cbf78f-483a-4a1c-9b66-80be020a5803" (UID: "f3cbf78f-483a-4a1c-9b66-80be020a5803"). InnerVolumeSpecName "minikube-ingress-dns-token-6kmnp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:39:10 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:10.305454    1691 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6kmnp" (UniqueName: "kubernetes.io/secret/f3cbf78f-483a-4a1c-9b66-80be020a5803-minikube-ingress-dns-token-6kmnp") on node "ingress-addon-legacy-537363" DevicePath ""
	Nov 07 23:39:11 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:11.383105    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 89d6e4b97de4f9ddba384ca6c8bf1dc809eb82460c20ba566fc7a2860e751a67
	Nov 07 23:39:11 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:11.659274    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: cd32f374d02eabe9f5fe18764ac74e8879a4f2b62ecc9229c8ff85002a1bef19
	Nov 07 23:39:11 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:11.661836    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 956fdc5b3cbcb3273aab0dbe3ec192d700bac8f91467012fcea74a510c1ec139
	Nov 07 23:39:11 ingress-addon-legacy-537363 kubelet[1691]: E1107 23:39:11.662100    1691 pod_workers.go:191] Error syncing pod eca8d614-21a1-4cde-bb94-145c8e62ece4 ("hello-world-app-5f5d8b66bb-49cjt_default(eca8d614-21a1-4cde-bb94-145c8e62ece4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-49cjt_default(eca8d614-21a1-4cde-bb94-145c8e62ece4)"
	Nov 07 23:39:11 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:11.672990    1691 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 89d6e4b97de4f9ddba384ca6c8bf1dc809eb82460c20ba566fc7a2860e751a67
	Nov 07 23:39:16 ingress-addon-legacy-537363 kubelet[1691]: E1107 23:39:16.323015    1691 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-96ms5.17957b9ec9726d49", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-96ms5", UID:"4c22edcd-66b3-4473-89f3-ce857f3e0660", APIVersion:"v1", ResourceVersion:"482", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-537363"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14ad08912e50549, ext:94696195502, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14ad08912e50549, ext:94696195502, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-96ms5.17957b9ec9726d49" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:39:16 ingress-addon-legacy-537363 kubelet[1691]: E1107 23:39:16.329609    1691 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-96ms5.17957b9ec9726d49", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-96ms5", UID:"4c22edcd-66b3-4473-89f3-ce857f3e0660", APIVersion:"v1", ResourceVersion:"482", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-537363"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14ad08912e50549, ext:94696195502, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14ad0891347c8e5, ext:94702668106, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-96ms5.17957b9ec9726d49" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:39:18 ingress-addon-legacy-537363 kubelet[1691]: W1107 23:39:18.690267    1691 pod_container_deletor.go:77] Container "51117d0121d2f0603baf543a7190991db0dcf79679daf64db6168fef2d96aab0" not found in pod's containers
	Nov 07 23:39:20 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:20.439173    1691 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-2kkwv" (UniqueName: "kubernetes.io/secret/4c22edcd-66b3-4473-89f3-ce857f3e0660-ingress-nginx-token-2kkwv") pod "4c22edcd-66b3-4473-89f3-ce857f3e0660" (UID: "4c22edcd-66b3-4473-89f3-ce857f3e0660")
	Nov 07 23:39:20 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:20.439240    1691 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4c22edcd-66b3-4473-89f3-ce857f3e0660-webhook-cert") pod "4c22edcd-66b3-4473-89f3-ce857f3e0660" (UID: "4c22edcd-66b3-4473-89f3-ce857f3e0660")
	Nov 07 23:39:20 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:20.445949    1691 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c22edcd-66b3-4473-89f3-ce857f3e0660-ingress-nginx-token-2kkwv" (OuterVolumeSpecName: "ingress-nginx-token-2kkwv") pod "4c22edcd-66b3-4473-89f3-ce857f3e0660" (UID: "4c22edcd-66b3-4473-89f3-ce857f3e0660"). InnerVolumeSpecName "ingress-nginx-token-2kkwv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:39:20 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:20.446139    1691 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c22edcd-66b3-4473-89f3-ce857f3e0660-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4c22edcd-66b3-4473-89f3-ce857f3e0660" (UID: "4c22edcd-66b3-4473-89f3-ce857f3e0660"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:39:20 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:20.539590    1691 reconciler.go:319] Volume detached for volume "ingress-nginx-token-2kkwv" (UniqueName: "kubernetes.io/secret/4c22edcd-66b3-4473-89f3-ce857f3e0660-ingress-nginx-token-2kkwv") on node "ingress-addon-legacy-537363" DevicePath ""
	Nov 07 23:39:20 ingress-addon-legacy-537363 kubelet[1691]: I1107 23:39:20.539640    1691 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4c22edcd-66b3-4473-89f3-ce857f3e0660-webhook-cert") on node "ingress-addon-legacy-537363" DevicePath ""
	Nov 07 23:39:21 ingress-addon-legacy-537363 kubelet[1691]: W1107 23:39:21.397910    1691 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/4c22edcd-66b3-4473-89f3-ce857f3e0660/volumes" does not exist
	
	* 
	* ==> storage-provisioner [59923d39956469ff62e775fefdef20afb3c10ab3d07c3a3682914867000de7bb] <==
	* I1107 23:38:01.326728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:38:01.339141       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:38:01.339234       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:38:01.347006       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:38:01.347879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-537363_3a2abe59-3ae0-4882-b58f-e675209c84ad!
	I1107 23:38:01.348862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"080280fa-02da-49c8-9f77-9c16f44bd445", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-537363_3a2abe59-3ae0-4882-b58f-e675209c84ad became leader
	I1107 23:38:01.448077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-537363_3a2abe59-3ae0-4882-b58f-e675209c84ad!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-537363 -n ingress-addon-legacy-537363
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-537363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (52.85s)

                                                
                                    

Test pass (272/308)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 30.02
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.3/json-events 13.91
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.62
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
25 TestAddons/Setup 142.3
27 TestAddons/parallel/Registry 14.67
29 TestAddons/parallel/InspektorGadget 10.88
30 TestAddons/parallel/MetricsServer 6.03
33 TestAddons/parallel/CSI 59.42
34 TestAddons/parallel/Headlamp 10.29
35 TestAddons/parallel/CloudSpanner 5.73
36 TestAddons/parallel/LocalPath 53.79
37 TestAddons/parallel/NvidiaDevicePlugin 5.59
40 TestAddons/serial/GCPAuth/Namespaces 0.35
41 TestAddons/StoppedEnableDisable 12.45
42 TestCertOptions 36.71
43 TestCertExpiration 230.77
45 TestForceSystemdFlag 43.76
46 TestForceSystemdEnv 43.56
47 TestDockerEnvContainerd 47.45
52 TestErrorSpam/setup 30.09
53 TestErrorSpam/start 0.87
54 TestErrorSpam/status 1.12
55 TestErrorSpam/pause 1.9
56 TestErrorSpam/unpause 1.99
57 TestErrorSpam/stop 1.51
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 82.87
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 6.19
64 TestFunctional/serial/KubeContext 0.07
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 5
69 TestFunctional/serial/CacheCmd/cache/add_local 1.79
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.39
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.56
74 TestFunctional/serial/CacheCmd/cache/delete 0.29
75 TestFunctional/serial/MinikubeKubectlCmd 0.22
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
77 TestFunctional/serial/ExtraConfig 43.54
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.98
80 TestFunctional/serial/LogsFileCmd 1.94
81 TestFunctional/serial/InvalidService 4.36
83 TestFunctional/parallel/ConfigCmd 0.72
84 TestFunctional/parallel/DashboardCmd 10.88
85 TestFunctional/parallel/DryRun 0.65
86 TestFunctional/parallel/InternationalLanguage 0.35
87 TestFunctional/parallel/StatusCmd 1.3
91 TestFunctional/parallel/ServiceCmdConnect 7.73
92 TestFunctional/parallel/AddonsCmd 0.22
93 TestFunctional/parallel/PersistentVolumeClaim 27.73
95 TestFunctional/parallel/SSHCmd 0.86
96 TestFunctional/parallel/CpCmd 1.69
98 TestFunctional/parallel/FileSync 0.48
99 TestFunctional/parallel/CertSync 2.24
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
108 TestFunctional/parallel/Version/short 0.08
109 TestFunctional/parallel/Version/components 0.96
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
114 TestFunctional/parallel/ImageCommands/ImageBuild 3.65
115 TestFunctional/parallel/ImageCommands/Setup 2.39
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.37
123 TestFunctional/parallel/ServiceCmd/List 0.48
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
126 TestFunctional/parallel/ServiceCmd/Format 0.57
127 TestFunctional/parallel/ServiceCmd/URL 0.53
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.74
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.56
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.77
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
144 TestFunctional/parallel/ProfileCmd/profile_list 0.48
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
146 TestFunctional/parallel/MountCmd/any-port 7.2
147 TestFunctional/parallel/MountCmd/specific-port 2.46
148 TestFunctional/parallel/MountCmd/VerifyCleanup 2.46
149 TestFunctional/delete_addon-resizer_images 0.08
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 101.75
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.02
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.72
162 TestJSONOutput/start/Command 82.93
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.84
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.76
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.87
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.27
187 TestKicCustomNetwork/create_custom_network 43.94
188 TestKicCustomNetwork/use_default_bridge_network 37.35
189 TestKicExistingNetwork 35.76
190 TestKicCustomSubnet 34.24
191 TestKicStaticIP 34.61
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 71.13
196 TestMountStart/serial/StartWithMountFirst 9.27
197 TestMountStart/serial/VerifyMountFirst 0.29
198 TestMountStart/serial/StartWithMountSecond 7.1
199 TestMountStart/serial/VerifyMountSecond 0.3
200 TestMountStart/serial/DeleteFirst 1.71
201 TestMountStart/serial/VerifyMountPostDelete 0.3
202 TestMountStart/serial/Stop 1.24
203 TestMountStart/serial/RestartStopped 7.74
204 TestMountStart/serial/VerifyMountPostStop 0.29
207 TestMultiNode/serial/FreshStart2Nodes 73.58
208 TestMultiNode/serial/DeployApp2Nodes 4.67
209 TestMultiNode/serial/PingHostFrom2Pods 1.24
210 TestMultiNode/serial/AddNode 18.06
211 TestMultiNode/serial/ProfileList 0.37
212 TestMultiNode/serial/CopyFile 11.65
213 TestMultiNode/serial/StopNode 2.46
214 TestMultiNode/serial/StartAfterStop 12.28
215 TestMultiNode/serial/RestartKeepsNodes 122.55
216 TestMultiNode/serial/DeleteNode 5.34
217 TestMultiNode/serial/StopMultiNode 24.29
218 TestMultiNode/serial/RestartMultiNode 78.81
219 TestMultiNode/serial/ValidateNameConflict 32.21
224 TestPreload 173.72
226 TestScheduledStopUnix 109.92
229 TestInsufficientStorage 11.2
230 TestRunningBinaryUpgrade 65.24
232 TestKubernetesUpgrade 385.26
233 TestMissingContainerUpgrade 215.1
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 38.17
237 TestNoKubernetes/serial/StartWithStopK8s 19.12
238 TestNoKubernetes/serial/Start 5.83
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
240 TestNoKubernetes/serial/ProfileList 0.64
241 TestNoKubernetes/serial/Stop 1.31
242 TestNoKubernetes/serial/StartNoArgs 7.86
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
244 TestStoppedBinaryUpgrade/Setup 1.73
245 TestStoppedBinaryUpgrade/Upgrade 109.7
246 TestStoppedBinaryUpgrade/MinikubeLogs 1.55
255 TestPause/serial/Start 67.49
256 TestPause/serial/SecondStartNoReconfiguration 8.01
257 TestPause/serial/Pause 1.06
258 TestPause/serial/VerifyStatus 0.47
259 TestPause/serial/Unpause 0.95
260 TestPause/serial/PauseAgain 1.25
261 TestPause/serial/DeletePaused 2.75
262 TestPause/serial/VerifyDeletedResources 0.5
270 TestNetworkPlugins/group/false 6.66
275 TestStartStop/group/old-k8s-version/serial/FirstStart 125.39
276 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
278 TestStartStop/group/old-k8s-version/serial/Stop 12.22
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
280 TestStartStop/group/old-k8s-version/serial/SecondStart 661.48
282 TestStartStop/group/no-preload/serial/FirstStart 85.56
283 TestStartStop/group/no-preload/serial/DeployApp 8.5
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.3
285 TestStartStop/group/no-preload/serial/Stop 12.22
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
287 TestStartStop/group/no-preload/serial/SecondStart 338.01
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.19
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.41
291 TestStartStop/group/no-preload/serial/Pause 3.64
293 TestStartStop/group/embed-certs/serial/FirstStart 59.9
294 TestStartStop/group/embed-certs/serial/DeployApp 8.46
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
296 TestStartStop/group/embed-certs/serial/Stop 12.12
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
298 TestStartStop/group/embed-certs/serial/SecondStart 339.03
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
302 TestStartStop/group/old-k8s-version/serial/Pause 3.49
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.98
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.19
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.6
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
313 TestStartStop/group/embed-certs/serial/Pause 3.66
315 TestStartStop/group/newest-cni/serial/FirstStart 46.03
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
318 TestStartStop/group/newest-cni/serial/Stop 1.28
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
320 TestStartStop/group/newest-cni/serial/SecondStart 33.88
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
324 TestStartStop/group/newest-cni/serial/Pause 3.56
325 TestNetworkPlugins/group/auto/Start 84.24
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.03
327 TestNetworkPlugins/group/auto/KubeletFlags 0.34
328 TestNetworkPlugins/group/auto/NetCatPod 12.37
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.56
332 TestNetworkPlugins/group/auto/DNS 0.27
333 TestNetworkPlugins/group/kindnet/Start 96.37
334 TestNetworkPlugins/group/auto/Localhost 0.26
335 TestNetworkPlugins/group/auto/HairPin 0.27
336 TestNetworkPlugins/group/calico/Start 68.52
337 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
338 TestNetworkPlugins/group/calico/ControllerPod 5.04
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
340 TestNetworkPlugins/group/kindnet/NetCatPod 11.48
341 TestNetworkPlugins/group/calico/KubeletFlags 0.41
342 TestNetworkPlugins/group/calico/NetCatPod 10.48
343 TestNetworkPlugins/group/calico/DNS 0.22
344 TestNetworkPlugins/group/calico/Localhost 0.21
345 TestNetworkPlugins/group/kindnet/DNS 0.35
346 TestNetworkPlugins/group/calico/HairPin 0.23
347 TestNetworkPlugins/group/kindnet/Localhost 0.29
348 TestNetworkPlugins/group/kindnet/HairPin 0.28
349 TestNetworkPlugins/group/custom-flannel/Start 73.98
350 TestNetworkPlugins/group/enable-default-cni/Start 89.24
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.43
353 TestNetworkPlugins/group/custom-flannel/DNS 0.26
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
357 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.37
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
361 TestNetworkPlugins/group/flannel/Start 66.27
362 TestNetworkPlugins/group/bridge/Start 88.62
363 TestNetworkPlugins/group/flannel/ControllerPod 5.04
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
365 TestNetworkPlugins/group/flannel/NetCatPod 10.35
366 TestNetworkPlugins/group/flannel/DNS 0.24
367 TestNetworkPlugins/group/flannel/Localhost 0.19
368 TestNetworkPlugins/group/flannel/HairPin 0.19
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.5
370 TestNetworkPlugins/group/bridge/NetCatPod 11.36
371 TestNetworkPlugins/group/bridge/DNS 0.19
372 TestNetworkPlugins/group/bridge/Localhost 0.18
373 TestNetworkPlugins/group/bridge/HairPin 0.18
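Duration listings like the one above can be mined for the slowest tests with a short script. A minimal sketch, assuming the whitespace-separated `index  test-name  seconds` format shown in this report (the three sample rows are copied from the listing):

```python
# Parse "index  test-name  seconds" rows and rank tests by runtime.
sample = """\
215 TestMultiNode/serial/RestartKeepsNodes 122.55
232 TestKubernetesUpgrade 385.26
280 TestStartStop/group/old-k8s-version/serial/SecondStart 661.48
"""

def parse_durations(text):
    """Return (test_name, seconds) tuples for each well-formed row."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            _idx, name, secs = parts
            rows.append((name, float(secs)))
    return rows

# Sort slowest-first so long-running tests surface immediately.
slowest = sorted(parse_durations(sample), key=lambda r: r[1], reverse=True)
print(slowest[0])  # -> ('TestStartStop/group/old-k8s-version/serial/SecondStart', 661.48)
```

Feeding the full table through this highlights, for example, that `old-k8s-version/serial/SecondStart` (661.48s) dominates the run.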
TestDownloadOnly/v1.16.0/json-events (30.02s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-746330 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-746330 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (30.015949903s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (30.02s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-746330
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-746330: exit status 85 (92.747341ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-746330 | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC |          |
	|         | -p download-only-746330        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:25:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:25:41.431202  258495 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:25:41.431436  258495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:25:41.431467  258495 out.go:309] Setting ErrFile to fd 2...
	I1107 23:25:41.431490  258495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:25:41.431780  258495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	W1107 23:25:41.431929  258495 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-253150/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-253150/.minikube/config/config.json: no such file or directory
	I1107 23:25:41.432361  258495 out.go:303] Setting JSON to true
	I1107 23:25:41.433556  258495 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7488,"bootTime":1699392054,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1107 23:25:41.433662  258495 start.go:138] virtualization:  
	I1107 23:25:41.436269  258495 out.go:97] [download-only-746330] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W1107 23:25:41.436487  258495 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 23:25:41.436617  258495 notify.go:220] Checking for updates...
	I1107 23:25:41.439544  258495 out.go:169] MINIKUBE_LOCATION=17585
	I1107 23:25:41.441291  258495 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:25:41.442993  258495 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:25:41.444777  258495 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1107 23:25:41.446484  258495 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1107 23:25:41.449547  258495 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:25:41.449787  258495 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:25:41.473925  258495 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:25:41.474040  258495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:25:41.555111  258495 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-07 23:25:41.545005356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:25:41.555213  258495 docker.go:295] overlay module found
	I1107 23:25:41.556822  258495 out.go:97] Using the docker driver based on user configuration
	I1107 23:25:41.556846  258495 start.go:298] selected driver: docker
	I1107 23:25:41.556853  258495 start.go:902] validating driver "docker" against <nil>
	I1107 23:25:41.556957  258495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:25:41.628665  258495 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-07 23:25:41.6189688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archite
cture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:25:41.628837  258495 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:25:41.629120  258495 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1107 23:25:41.629350  258495 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 23:25:41.631295  258495 out.go:169] Using Docker driver with root privileges
	I1107 23:25:41.632955  258495 cni.go:84] Creating CNI manager for ""
	I1107 23:25:41.632974  258495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:25:41.632987  258495 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:25:41.633002  258495 start_flags.go:323] config:
	{Name:download-only-746330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-746330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:25:41.634663  258495 out.go:97] Starting control plane node download-only-746330 in cluster download-only-746330
	I1107 23:25:41.634684  258495 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1107 23:25:41.636537  258495 out.go:97] Pulling base image ...
	I1107 23:25:41.636562  258495 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1107 23:25:41.636728  258495 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:25:41.653981  258495 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:25:41.654200  258495 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:25:41.654298  258495 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:25:41.724159  258495 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1107 23:25:41.724195  258495 cache.go:56] Caching tarball of preloaded images
	I1107 23:25:41.724356  258495 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1107 23:25:41.726440  258495 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 23:25:41.726462  258495 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:25:41.874159  258495 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1107 23:25:49.567238  258495 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:25:55.536542  258495 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:25:55.536670  258495 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:25:56.637786  258495 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1107 23:25:56.638184  258495 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/download-only-746330/config.json ...
	I1107 23:25:56.638219  258495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/download-only-746330/config.json: {Name:mkd8a99ea8e6e847074b4ae0ba502cb77d572f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:25:56.638407  258495 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1107 23:25:56.639196  258495 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17585-253150/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-746330"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.3/json-events (13.91s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-746330 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-746330 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.914196044s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (13.91s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-746330
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-746330: exit status 85 (91.805491ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-746330 | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC |          |
	|         | -p download-only-746330        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-746330 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |          |
	|         | -p download-only-746330        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:26:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:26:11.535372  258573 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:26:11.535616  258573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:26:11.535645  258573 out.go:309] Setting ErrFile to fd 2...
	I1107 23:26:11.535665  258573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:26:11.535988  258573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	W1107 23:26:11.536178  258573 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-253150/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-253150/.minikube/config/config.json: no such file or directory
	I1107 23:26:11.536515  258573 out.go:303] Setting JSON to true
	I1107 23:26:11.537626  258573 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7518,"bootTime":1699392054,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1107 23:26:11.537732  258573 start.go:138] virtualization:  
	I1107 23:26:11.539980  258573 out.go:97] [download-only-746330] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:26:11.541937  258573 out.go:169] MINIKUBE_LOCATION=17585
	I1107 23:26:11.540304  258573 notify.go:220] Checking for updates...
	I1107 23:26:11.545055  258573 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:26:11.546653  258573 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:26:11.548103  258573 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1107 23:26:11.549825  258573 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1107 23:26:11.553287  258573 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:26:11.553854  258573 config.go:182] Loaded profile config "download-only-746330": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1107 23:26:11.553917  258573 start.go:810] api.Load failed for download-only-746330: filestore "download-only-746330": Docker machine "download-only-746330" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:26:11.554026  258573 driver.go:378] Setting default libvirt URI to qemu:///system
	W1107 23:26:11.554096  258573 start.go:810] api.Load failed for download-only-746330: filestore "download-only-746330": Docker machine "download-only-746330" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:26:11.578139  258573 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:26:11.578250  258573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:26:11.656538  258573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:26:11.646554365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:26:11.656640  258573 docker.go:295] overlay module found
	I1107 23:26:11.658590  258573 out.go:97] Using the docker driver based on existing profile
	I1107 23:26:11.658635  258573 start.go:298] selected driver: docker
	I1107 23:26:11.658649  258573 start.go:902] validating driver "docker" against &{Name:download-only-746330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-746330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:26:11.658816  258573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:26:11.728696  258573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-07 23:26:11.718508193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:26:11.729143  258573 cni.go:84] Creating CNI manager for ""
	I1107 23:26:11.729165  258573 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1107 23:26:11.729185  258573 start_flags.go:323] config:
	{Name:download-only-746330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-746330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:26:11.731156  258573 out.go:97] Starting control plane node download-only-746330 in cluster download-only-746330
	I1107 23:26:11.731187  258573 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1107 23:26:11.732987  258573 out.go:97] Pulling base image ...
	I1107 23:26:11.733014  258573 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1107 23:26:11.733096  258573 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:26:11.751014  258573 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:26:11.751152  258573 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:26:11.751176  258573 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 23:26:11.751182  258573 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 23:26:11.751189  258573 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:26:11.817851  258573 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1107 23:26:11.817889  258573 cache.go:56] Caching tarball of preloaded images
	I1107 23:26:11.818053  258573 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1107 23:26:11.820307  258573 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1107 23:26:11.820330  258573 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1107 23:26:11.964343  258573 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:bef3312f8cc1e9e2e6a78bd8b3d269c4 -> /home/jenkins/minikube-integration/17585-253150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-746330"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-746330
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-022767 --alsologtostderr --binary-mirror http://127.0.0.1:36359 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-022767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-022767
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-257591
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-257591: exit status 85 (87.06038ms)

-- stdout --
	* Profile "addons-257591" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257591"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-257591
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-257591: exit status 85 (101.447125ms)

-- stdout --
	* Profile "addons-257591" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257591"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (142.3s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-257591 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-257591 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.301953408s)
--- PASS: TestAddons/Setup (142.30s)

TestAddons/parallel/Registry (14.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 48.691023ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zhkpz" [ce9558ec-5e12-4932-9d39-c4f87f0d8ed1] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.021882493s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7psnh" [efe1bce4-fa1e-4767-9f02-ea2ec2980490] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014064247s
addons_test.go:339: (dbg) Run:  kubectl --context addons-257591 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-257591 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-257591 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.475062854s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 ip
2023/11/07 23:29:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.67s)

TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qgcpz" [5ddeaee7-98c7-49c9-a4c9-03a73b2343cf] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013762719s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-257591
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-257591: (5.862027999s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (6.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 44.854354ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-cg26p" [45c4bdce-6aef-4f79-907d-486e408dab7a] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.035421001s
addons_test.go:414: (dbg) Run:  kubectl --context addons-257591 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.03s)

TestAddons/parallel/CSI (59.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 32.794801ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-257591 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-257591 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9f76dad2-e50e-4dbe-9d74-ea0ed36dcb0d] Pending
helpers_test.go:344: "task-pv-pod" [9f76dad2-e50e-4dbe-9d74-ea0ed36dcb0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9f76dad2-e50e-4dbe-9d74-ea0ed36dcb0d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.019253373s
addons_test.go:583: (dbg) Run:  kubectl --context addons-257591 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-257591 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-257591 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-257591 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-257591 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-257591 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-257591 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [138dd85a-075e-494e-a61f-1bd2087f8e9f] Pending
helpers_test.go:344: "task-pv-pod-restore" [138dd85a-075e-494e-a61f-1bd2087f8e9f] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.021087395s
addons_test.go:625: (dbg) Run:  kubectl --context addons-257591 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-257591 delete pod task-pv-pod-restore: (1.022717879s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-257591 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-257591 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-257591 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80683169s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.42s)

TestAddons/parallel/Headlamp (10.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-257591 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-257591 --alsologtostderr -v=1: (1.231441169s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-d8prc" [31eb3ab8-dd37-4315-87ff-36e69a009ad1] Pending
helpers_test.go:344: "headlamp-94b766c-d8prc" [31eb3ab8-dd37-4315-87ff-36e69a009ad1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-d8prc" [31eb3ab8-dd37-4315-87ff-36e69a009ad1] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.058403436s
--- PASS: TestAddons/parallel/Headlamp (10.29s)

TestAddons/parallel/CloudSpanner (5.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-fhtmc" [0ea5fa88-112e-477b-9919-bdc6904787f6] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.041768675s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-257591
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/parallel/LocalPath (53.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-257591 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-257591 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [df61cb73-423f-4ab7-8896-04aa3d479ed5] Pending
helpers_test.go:344: "test-local-path" [df61cb73-423f-4ab7-8896-04aa3d479ed5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [df61cb73-423f-4ab7-8896-04aa3d479ed5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [df61cb73-423f-4ab7-8896-04aa3d479ed5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.009849452s
addons_test.go:890: (dbg) Run:  kubectl --context addons-257591 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 ssh "cat /opt/local-path-provisioner/pvc-9e3ec8d5-6b02-4665-bda0-da43e0c8626d_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-257591 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-257591 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-257591 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-257591 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.164461436s)
--- PASS: TestAddons/parallel/LocalPath (53.79s)

TestAddons/parallel/NvidiaDevicePlugin (5.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9gvwv" [0b930239-b130-4c92-8be6-38b48109e2e7] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01264016s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-257591
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.59s)

TestAddons/serial/GCPAuth/Namespaces (0.35s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-257591 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-257591 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)

TestAddons/StoppedEnableDisable (12.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-257591
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-257591: (12.094730335s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-257591
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-257591
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-257591
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

TestCertOptions (36.71s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-433587 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-433587 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.939131573s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-433587 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-433587 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-433587 -- "sudo cat /etc/kubernetes/admin.conf"
E1108 00:06:52.779644  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-433587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-433587
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-433587: (2.05987901s)
--- PASS: TestCertOptions (36.71s)

TestCertExpiration (230.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-935785 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1108 00:05:42.548004  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-935785 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.709134696s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-935785 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-935785 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.393090599s)
helpers_test.go:175: Cleaning up "cert-expiration-935785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-935785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-935785: (3.668120329s)
--- PASS: TestCertExpiration (230.77s)

TestForceSystemdFlag (43.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-450366 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-450366 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.887878089s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-450366 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-450366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-450366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-450366: (4.449724188s)
--- PASS: TestForceSystemdFlag (43.76s)

TestForceSystemdEnv (43.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-328246 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-328246 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.844103904s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-328246 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-328246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-328246
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-328246: (2.300045966s)
--- PASS: TestForceSystemdEnv (43.56s)

TestDockerEnvContainerd (47.45s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-879786 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-879786 --driver=docker  --container-runtime=containerd: (30.816925089s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-879786"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-879786": (1.550554075s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-61EUEBKhWORE/agent.275573" SSH_AGENT_PID="275574" DOCKER_HOST=ssh://docker@127.0.0.1:33089 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-61EUEBKhWORE/agent.275573" SSH_AGENT_PID="275574" DOCKER_HOST=ssh://docker@127.0.0.1:33089 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-61EUEBKhWORE/agent.275573" SSH_AGENT_PID="275574" DOCKER_HOST=ssh://docker@127.0.0.1:33089 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.758774027s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-61EUEBKhWORE/agent.275573" SSH_AGENT_PID="275574" DOCKER_HOST=ssh://docker@127.0.0.1:33089 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-879786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-879786
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-879786: (2.060321964s)
--- PASS: TestDockerEnvContainerd (47.45s)

TestErrorSpam/setup (30.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-914943 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-914943 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-914943 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-914943 --driver=docker  --container-runtime=containerd: (30.092299006s)
--- PASS: TestErrorSpam/setup (30.09s)

TestErrorSpam/start (0.87s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.90s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 pause
--- PASS: TestErrorSpam/pause (1.90s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 stop: (1.265676926s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914943 --log_dir /tmp/nospam-914943 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17585-253150/.minikube/files/etc/test/nested/copy/258490/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-662509 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1107 23:33:49.734597  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:49.740221  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:49.750477  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:49.770790  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:49.811097  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:49.891446  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:50.051939  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:50.372579  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:51.012835  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:52.293085  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:54.853355  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:33:59.974442  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:34:10.215266  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-662509 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m22.867105284s)
--- PASS: TestFunctional/serial/StartWithProxy (82.87s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.19s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-662509 --alsologtostderr -v=8
E1107 23:34:30.696140  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-662509 --alsologtostderr -v=8: (6.188182814s)
functional_test.go:659: soft start took 6.188698366s for "functional-662509" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.19s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.10s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-662509 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 cache add registry.k8s.io/pause:3.1: (1.751390861s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 cache add registry.k8s.io/pause:3.3: (1.697937757s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 cache add registry.k8s.io/pause:latest: (1.551376674s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-662509 /tmp/TestFunctionalserialCacheCmdcacheadd_local3790578126/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cache add minikube-local-cache-test:functional-662509
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 cache add minikube-local-cache-test:functional-662509: (1.286005571s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cache delete minikube-local-cache-test:functional-662509
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-662509
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.79s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (350.834691ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 cache reload: (1.462681142s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.56s)

TestFunctional/serial/CacheCmd/cache/delete (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.29s)

TestFunctional/serial/MinikubeKubectlCmd (0.22s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 kubectl -- --context functional-662509 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.22s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-662509 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (43.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-662509 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 23:35:11.657029  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-662509 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.539211747s)
functional_test.go:757: restart took 43.539360841s for "functional-662509" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.54s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-662509 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.98s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 logs: (1.981399827s)
--- PASS: TestFunctional/serial/LogsCmd (1.98s)

TestFunctional/serial/LogsFileCmd (1.94s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 logs --file /tmp/TestFunctionalserialLogsFileCmd464698359/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 logs --file /tmp/TestFunctionalserialLogsFileCmd464698359/001/logs.txt: (1.939067415s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.94s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-662509 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-662509
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-662509: exit status 115 (614.957812ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30390 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-662509 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

TestFunctional/parallel/ConfigCmd (0.72s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 config get cpus: exit status 14 (117.805429ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 config get cpus: exit status 14 (146.168334ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.72s)

TestFunctional/parallel/DashboardCmd (10.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-662509 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-662509 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 290889: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.88s)

TestFunctional/parallel/DryRun (0.65s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-662509 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-662509 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (235.126561ms)

-- stdout --
	* [functional-662509] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1107 23:36:25.337502  290214 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:36:25.337725  290214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:36:25.337753  290214 out.go:309] Setting ErrFile to fd 2...
	I1107 23:36:25.337774  290214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:36:25.338063  290214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:36:25.338453  290214 out.go:303] Setting JSON to false
	I1107 23:36:25.339501  290214 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8132,"bootTime":1699392054,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1107 23:36:25.339599  290214 start.go:138] virtualization:  
	I1107 23:36:25.342135  290214 out.go:177] * [functional-662509] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1107 23:36:25.344514  290214 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:36:25.346591  290214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:36:25.344675  290214 notify.go:220] Checking for updates...
	I1107 23:36:25.348537  290214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:36:25.350091  290214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1107 23:36:25.351702  290214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:36:25.353435  290214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:36:25.355621  290214 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:36:25.356194  290214 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:36:25.380460  290214 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:36:25.380583  290214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:36:25.491159  290214 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-07 23:36:25.480309368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:36:25.491275  290214 docker.go:295] overlay module found
	I1107 23:36:25.494082  290214 out.go:177] * Using the docker driver based on existing profile
	I1107 23:36:25.495665  290214 start.go:298] selected driver: docker
	I1107 23:36:25.495688  290214 start.go:902] validating driver "docker" against &{Name:functional-662509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-662509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:36:25.495793  290214 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:36:25.497962  290214 out.go:177] 
	W1107 23:36:25.499455  290214 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 23:36:25.500979  290214 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-662509 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.65s)

TestFunctional/parallel/InternationalLanguage (0.35s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-662509 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-662509 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (348.169008ms)

-- stdout --
	* [functional-662509] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1107 23:36:26.036413  290378 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:36:26.036725  290378 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:36:26.036739  290378 out.go:309] Setting ErrFile to fd 2...
	I1107 23:36:26.036746  290378 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:36:26.037417  290378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:36:26.038194  290378 out.go:303] Setting JSON to false
	I1107 23:36:26.039794  290378 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8132,"bootTime":1699392054,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1107 23:36:26.039932  290378 start.go:138] virtualization:  
	I1107 23:36:26.042458  290378 out.go:177] * [functional-662509] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1107 23:36:26.044350  290378 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:36:26.046294  290378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:36:26.044464  290378 notify.go:220] Checking for updates...
	I1107 23:36:26.050218  290378 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1107 23:36:26.051925  290378 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1107 23:36:26.053546  290378 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1107 23:36:26.055136  290378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:36:26.057526  290378 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:36:26.058190  290378 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:36:26.140897  290378 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:36:26.141015  290378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:36:26.261560  290378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-07 23:36:26.249876561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:36:26.261663  290378 docker.go:295] overlay module found
	I1107 23:36:26.263627  290378 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 23:36:26.265214  290378 start.go:298] selected driver: docker
	I1107 23:36:26.265255  290378 start.go:902] validating driver "docker" against &{Name:functional-662509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-662509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:36:26.265353  290378 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:36:26.267761  290378 out.go:177] 
	W1107 23:36:26.269482  290378 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 23:36:26.271022  290378 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.35s)

TestFunctional/parallel/StatusCmd (1.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)

TestFunctional/parallel/ServiceCmdConnect (7.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-662509 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-662509 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-sf2j8" [672176c3-adb3-42dd-866c-db4681026b47] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-sf2j8" [672176c3-adb3-42dd-866c-db4681026b47] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.017113273s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31894
functional_test.go:1674: http://192.168.49.2:31894: success! body:

Hostname: hello-node-connect-7799dfb7c6-sf2j8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31894
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.73s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fad2151e-4f72-4f2c-805b-b02cbefc57c6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012059068s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-662509 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-662509 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-662509 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-662509 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b40af640-0488-4b15-a53c-5d88c114b183] Pending
helpers_test.go:344: "sp-pod" [b40af640-0488-4b15-a53c-5d88c114b183] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b40af640-0488-4b15-a53c-5d88c114b183] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.024354946s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-662509 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-662509 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-662509 delete -f testdata/storage-provisioner/pod.yaml: (1.19718275s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-662509 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f0ae172-3487-473f-9f70-de703f276497] Pending
helpers_test.go:344: "sp-pod" [3f0ae172-3487-473f-9f70-de703f276497] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f0ae172-3487-473f-9f70-de703f276497] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.023086841s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-662509 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.73s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh -n functional-662509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 cp functional-662509:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4234406771/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh -n functional-662509 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)

                                                
                                    
TestFunctional/parallel/FileSync (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/258490/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /etc/test/nested/copy/258490/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)

                                                
                                    
TestFunctional/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/258490.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /etc/ssl/certs/258490.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/258490.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /usr/share/ca-certificates/258490.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2584902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /etc/ssl/certs/2584902.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2584902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /usr/share/ca-certificates/2584902.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-662509 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh "sudo systemctl is-active docker": exit status 1 (465.739842ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh "sudo systemctl is-active crio": exit status 1 (378.837443ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)
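The "Non-zero exit … exit status 1" lines above are the expected outcome here: with containerd as the active runtime, `systemctl is-active docker` and `systemctl is-active crio` print `inactive` and exit non-zero (the `ssh: Process exited with status 3` is systemd's conventional code for a unit that is not running). A minimal sketch of the pass condition, using a hypothetical helper (`runtime_disabled` is not the test's actual implementation):

```python
# Sketch of the check: a runtime counts as disabled when
# `systemctl is-active <unit>` exits non-zero and reports "inactive".
def runtime_disabled(returncode: int, stdout: str) -> bool:
    """True when `systemctl is-active` reports a stopped unit."""
    return returncode != 0 and stdout.strip() == "inactive"

# The docker and crio results recorded above: exit status 3, stdout "inactive".
print(runtime_disabled(3, "inactive\n"))  # True: runtime correctly disabled
print(runtime_disabled(0, "active\n"))    # False: runtime still running
```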

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-662509 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-662509
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-662509 image ls --format short --alsologtostderr:
I1107 23:36:31.243526  291315 out.go:296] Setting OutFile to fd 1 ...
I1107 23:36:31.243709  291315 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:31.243750  291315 out.go:309] Setting ErrFile to fd 2...
I1107 23:36:31.243772  291315 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:31.244059  291315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
I1107 23:36:31.244752  291315 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:31.244949  291315 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:31.245686  291315 cli_runner.go:164] Run: docker container inspect functional-662509 --format={{.State.Status}}
I1107 23:36:31.269068  291315 ssh_runner.go:195] Run: systemctl --version
I1107 23:36:31.269122  291315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662509
I1107 23:36:31.288430  291315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/functional-662509/id_rsa Username:docker}
I1107 23:36:31.383305  291315 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-662509 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.3            | sha256:827643 | 30.3MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.28.3            | sha256:a5dd5c | 22MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| localhost/my-image                          | functional-662509  | sha256:fa3363 | 831kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3            | sha256:537e9a | 31.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.3            | sha256:42a4e7 | 17.1MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-662509  | sha256:023a05 | 1.01kB |
| docker.io/library/nginx                     | latest             | sha256:81be38 | 67.2MB |
| docker.io/library/nginx                     | alpine             | sha256:aae348 | 19.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-662509 image ls --format table --alsologtostderr:
I1107 23:36:35.765906  291667 out.go:296] Setting OutFile to fd 1 ...
I1107 23:36:35.766062  291667 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:35.766070  291667 out.go:309] Setting ErrFile to fd 2...
I1107 23:36:35.766076  291667 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:35.766350  291667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
I1107 23:36:35.766995  291667 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:35.767134  291667 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:35.767618  291667 cli_runner.go:164] Run: docker container inspect functional-662509 --format={{.State.Status}}
I1107 23:36:35.787970  291667 ssh_runner.go:195] Run: systemctl --version
I1107 23:36:35.788038  291667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662509
I1107 23:36:35.814890  291667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/functional-662509/id_rsa Username:docker}
I1107 23:36:35.907037  291667 ssh_runner.go:195] Run: sudo crictl images --output json
2023/11/07 23:36:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-662509 image ls --format json --alsologtostderr:
[{"id":"sha256:81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6"],"repoTags":["docker.io/library/nginx:latest"],"size":"67241456"},{"id":"sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"31557550"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"30344361"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:fa3363a4d65f9a3c509107fa2e73c7b6cd23776e09b97bc27dbcd10bae7c7c70","repoDigests":[],"repoTags":["localhost/my-image:functional-662509"],"size":"830632"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19561536"},{"id":"sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"17063462"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:023a05323fd05be1583ed931ac02a85334d53a8239a960599299967308ecc81e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-662509"],"size":"1008"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"21981421"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-662509 image ls --format json --alsologtostderr:
I1107 23:36:35.483272  291640 out.go:296] Setting OutFile to fd 1 ...
I1107 23:36:35.483550  291640 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:35.483579  291640 out.go:309] Setting ErrFile to fd 2...
I1107 23:36:35.483598  291640 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:35.483881  291640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
I1107 23:36:35.484581  291640 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:35.484767  291640 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:35.485378  291640 cli_runner.go:164] Run: docker container inspect functional-662509 --format={{.State.Status}}
I1107 23:36:35.508546  291640 ssh_runner.go:195] Run: systemctl --version
I1107 23:36:35.508606  291640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662509
I1107 23:36:35.531099  291640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/functional-662509/id_rsa Username:docker}
I1107 23:36:35.626993  291640 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
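The stdout above is a single JSON array of image records, each with `id`, `repoDigests`, `repoTags`, and `size` (bytes, as a string). A short sketch of consuming that shape — the inline sample is an abbreviated stand-in (ids truncated), not the full output recorded above:

```python
import json

# Abbreviated sample in the same shape as `image ls --format json` stdout.
sample = json.loads("""[
  {"id": "sha256:829e9de338bd5f", "repoDigests": [],
   "repoTags": ["registry.k8s.io/pause:3.9"], "size": "268051"},
  {"id": "sha256:81be3802543947", "repoDigests": [],
   "repoTags": ["docker.io/library/nginx:latest"], "size": "67241456"}
]""")

# List each tag with its size in MB; note "size" must be converted from string.
for image in sample:
    for tag in image["repoTags"]:
        print(f"{tag}\t{int(image['size']) / 1e6:.1f}MB")
```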

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-662509 image ls --format yaml --alsologtostderr:
- id: sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "17063462"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "31557550"
- id: sha256:81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
repoTags:
- docker.io/library/nginx:latest
size: "67241456"
- id: sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "21981421"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "30344361"
- id: sha256:023a05323fd05be1583ed931ac02a85334d53a8239a960599299967308ecc81e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-662509
size: "1008"
- id: sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "19561536"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-662509 image ls --format yaml --alsologtostderr:
I1107 23:36:31.542610  291342 out.go:296] Setting OutFile to fd 1 ...
I1107 23:36:31.542805  291342 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:31.542825  291342 out.go:309] Setting ErrFile to fd 2...
I1107 23:36:31.542843  291342 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:31.543242  291342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
I1107 23:36:31.543919  291342 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:31.544097  291342 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:31.544691  291342 cli_runner.go:164] Run: docker container inspect functional-662509 --format={{.State.Status}}
I1107 23:36:31.568936  291342 ssh_runner.go:195] Run: systemctl --version
I1107 23:36:31.569001  291342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662509
I1107 23:36:31.591662  291342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/functional-662509/id_rsa Username:docker}
I1107 23:36:31.691385  291342 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh pgrep buildkitd: exit status 1 (405.973627ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image build -t localhost/my-image:functional-662509 testdata/build --alsologtostderr
E1107 23:36:33.577741  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 image build -t localhost/my-image:functional-662509 testdata/build --alsologtostderr: (2.974239351s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-662509 image build -t localhost/my-image:functional-662509 testdata/build --alsologtostderr:
I1107 23:36:32.262953  291425 out.go:296] Setting OutFile to fd 1 ...
I1107 23:36:32.263458  291425 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:32.263490  291425 out.go:309] Setting ErrFile to fd 2...
I1107 23:36:32.263510  291425 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:36:32.263819  291425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
I1107 23:36:32.264561  291425 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:32.265267  291425 config.go:182] Loaded profile config "functional-662509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1107 23:36:32.265875  291425 cli_runner.go:164] Run: docker container inspect functional-662509 --format={{.State.Status}}
I1107 23:36:32.286172  291425 ssh_runner.go:195] Run: systemctl --version
I1107 23:36:32.286249  291425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662509
I1107 23:36:32.306640  291425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/functional-662509/id_rsa Username:docker}
I1107 23:36:32.403332  291425 build_images.go:151] Building image from path: /tmp/build.1821029874.tar
I1107 23:36:32.403406  291425 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1107 23:36:32.415383  291425 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1821029874.tar
I1107 23:36:32.420276  291425 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1821029874.tar: stat -c "%s %y" /var/lib/minikube/build/build.1821029874.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1821029874.tar': No such file or directory
I1107 23:36:32.420303  291425 ssh_runner.go:362] scp /tmp/build.1821029874.tar --> /var/lib/minikube/build/build.1821029874.tar (3072 bytes)
I1107 23:36:32.452785  291425 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1821029874
I1107 23:36:32.472381  291425 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1821029874 -xf /var/lib/minikube/build/build.1821029874.tar
I1107 23:36:32.485700  291425 containerd.go:378] Building image: /var/lib/minikube/build/build.1821029874
I1107 23:36:32.485838  291425 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1821029874 --local dockerfile=/var/lib/minikube/build/build.1821029874 --output type=image,name=localhost/my-image:functional-662509
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.2s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:56b09dfdb823bb7c38354f815e896f9ab26dc2e1c614a1eb556117d97f88cafe 0.0s done
#8 exporting config sha256:fa3363a4d65f9a3c509107fa2e73c7b6cd23776e09b97bc27dbcd10bae7c7c70 0.0s done
#8 naming to localhost/my-image:functional-662509 done
#8 DONE 0.1s
I1107 23:36:35.108493  291425 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1821029874 --local dockerfile=/var/lib/minikube/build/build.1821029874 --output type=image,name=localhost/my-image:functional-662509: (2.622609222s)
I1107 23:36:35.108632  291425 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1821029874
I1107 23:36:35.122902  291425 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1821029874.tar
I1107 23:36:35.137298  291425 build_images.go:207] Built localhost/my-image:functional-662509 from /tmp/build.1821029874.tar
I1107 23:36:35.137380  291425 build_images.go:123] succeeded building to: functional-662509
I1107 23:36:35.137398  291425 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.65s)

TestFunctional/parallel/ImageCommands/Setup (2.39s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.363306119s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-662509
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-662509 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-662509 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-zwp4w" [ddaffd9f-a810-4878-9b37-6e7ec7a03659] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-zwp4w" [ddaffd9f-a810-4878-9b37-6e7ec7a03659] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.043823798s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.37s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 service list -o json
functional_test.go:1493: Took "469.814664ms" to run "out/minikube-linux-arm64 -p functional-662509 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31121
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31121
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image rm gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-662509 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-662509 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-662509 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-662509 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 288008: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-662509 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-662509 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3517f52d-f732-4d00-bf53-4c9eadeeb9ad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3517f52d-f732-4d00-bf53-4c9eadeeb9ad] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.021776512s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-662509
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 image save --daemon gcr.io/google-containers/addon-resizer:functional-662509 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-662509
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-662509 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.133.32 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-662509 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "371.472147ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "104.149611ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "444.717012ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "77.503347ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/MountCmd/any-port (7.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdany-port1962315843/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699400177290677079" to /tmp/TestFunctionalparallelMountCmdany-port1962315843/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699400177290677079" to /tmp/TestFunctionalparallelMountCmdany-port1962315843/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699400177290677079" to /tmp/TestFunctionalparallelMountCmdany-port1962315843/001/test-1699400177290677079
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (544.998697ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 23:36 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 23:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 23:36 test-1699400177290677079
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh cat /mount-9p/test-1699400177290677079
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-662509 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [51ec2608-9dec-4f75-a9fd-53031e0a9444] Pending
helpers_test.go:344: "busybox-mount" [51ec2608-9dec-4f75-a9fd-53031e0a9444] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [51ec2608-9dec-4f75-a9fd-53031e0a9444] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [51ec2608-9dec-4f75-a9fd-53031e0a9444] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.017029117s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-662509 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdany-port1962315843/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.20s)

TestFunctional/parallel/MountCmd/specific-port (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdspecific-port1044898714/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (416.286312ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdspecific-port1044898714/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-662509 ssh "sudo umount -f /mount-9p": exit status 1 (385.800577ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-662509 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdspecific-port1044898714/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T" /mount1: (1.479786039s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-662509 ssh "findmnt -T" /mount3
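The `findmnt -T <dir>` checks above can also be driven from `findmnt -J`, which emits JSON. A minimal sketch, assuming a hand-written sample of that JSON shape (the sample is illustrative, not taken from this run):

```python
import json

# Hand-written sample of `findmnt -J -T /mount1` output (assumed shape,
# not captured from this test run).
sample = '''{"filesystems": [
  {"target": "/mount1", "source": "192.168.49.1:/tmp/mnt", "fstype": "9p",
   "options": "rw,relatime"}
]}'''

fs = json.loads(sample)["filesystems"]
# A mount point is "verified" if some filesystem entry targets it with the
# expected 9p filesystem type.
mounted = any(entry["target"] == "/mount1" and entry["fstype"] == "9p"
              for entry in fs)
print("9p mount present:", mounted)
```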
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-662509 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-662509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1621896042/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-662509
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-662509
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-662509
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (101.75s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-537363 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-537363 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m41.748215414s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (101.75s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.02s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons enable ingress --alsologtostderr -v=5: (10.021510964s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.02s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.72s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-537363 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.72s)

TestJSONOutput/start/Command (82.93s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-004556 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1107 23:40:42.547291  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:42.552780  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:42.563060  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:42.583331  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:42.623576  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:42.704030  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:42.864508  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:43.185109  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:43.826047  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:45.106344  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:40:47.667283  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-004556 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m22.926048882s)
--- PASS: TestJSONOutput/start/Command (82.93s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-004556 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-004556 --output=json --user=testUser
E1107 23:40:52.788247  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-004556 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-004556 --output=json --user=testUser: (5.873412386s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-407622 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-407622 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.923396ms)

-- stdout --
	{"specversion":"1.0","id":"1a92b4f8-b50c-454d-8f3c-95228e5762cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-407622] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9928c5fa-e175-40f0-b712-30a2cf13f79e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"a2b07292-a96e-47d3-8396-77457ad9d60f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ff8d4b0d-a215-465c-880c-eecd80c4fb32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig"}}
	{"specversion":"1.0","id":"b17a1bf5-c902-4230-a616-45f9bc8103cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube"}}
	{"specversion":"1.0","id":"a8b738b9-15dd-47c7-941b-a3c143d9b33f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"97f0a85e-1234-4702-97b8-70836bf4c116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fae41563-f227-4d88-990f-2dcabf078fda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
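Each stdout line above is a CloudEvents-style JSON record, as emitted by minikube's `--output=json` mode. A minimal sketch of consuming one such line, using the error event from this run as the sample:

```python
import json

# One CloudEvents-style line as emitted by `minikube ... --output=json`
# (copied from the TestErrorJSONOutput stdout above).
line = ('{"specversion":"1.0","id":"fae41563-f227-4d88-990f-2dcabf078fda",'
        '"source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on linux/arm64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# The last segment of the event type distinguishes the record kind
# (step, info, error, ...).
kind = event["type"].rsplit(".", 1)[-1]
if kind == "error":
    print(event["data"]["exitcode"], event["data"]["message"])
```

This is how the JSONOutput tests can distinguish progress steps from terminal errors: the exit code arrives as a string field inside `data`, not as the process exit status alone.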
helpers_test.go:175: Cleaning up "json-output-error-407622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-407622
--- PASS: TestErrorJSONOutput (0.27s)

TestKicCustomNetwork/create_custom_network (43.94s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-164094 --network=
E1107 23:41:23.509143  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-164094 --network=: (41.766613605s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-164094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-164094
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-164094: (2.147603808s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.94s)

TestKicCustomNetwork/use_default_bridge_network (37.35s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-564904 --network=bridge
E1107 23:42:04.469359  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-564904 --network=bridge: (35.369781628s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-564904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-564904
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-564904: (1.961010898s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.35s)

TestKicExistingNetwork (35.76s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-488468 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-488468 --network=existing-network: (33.52186038s)
helpers_test.go:175: Cleaning up "existing-network-488468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-488468
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-488468: (2.070853163s)
--- PASS: TestKicExistingNetwork (35.76s)

TestKicCustomSubnet (34.24s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-566814 --subnet=192.168.60.0/24
E1107 23:43:26.390656  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:43:32.522527  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:32.527864  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:32.538132  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:32.558601  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:32.598845  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:32.679104  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:32.839429  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:33.159805  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-566814 --subnet=192.168.60.0/24: (32.109671076s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-566814 --format "{{(index .IPAM.Config 0).Subnet}}"
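The check above compares the `--subnet=192.168.60.0/24` request against what `docker network inspect` reports. The same comparison can be sketched offline with the standard `ipaddress` module; the "reported" value and node IP are hard-coded here for illustration, not taken from the inspect output:

```python
import ipaddress

requested = "192.168.60.0/24"   # value passed to minikube via --subnet
reported = "192.168.60.0/24"    # stand-in for the inspect output (assumed here)

net = ipaddress.ip_network(requested)
# The network Docker created must match the requested CIDR exactly...
assert ipaddress.ip_network(reported) == net
# ...and any node IP handed out must fall inside that range.
assert ipaddress.ip_address("192.168.60.2") in net
print("subnet check ok:", net)
```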
helpers_test.go:175: Cleaning up "custom-subnet-566814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-566814
E1107 23:43:33.799976  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:35.080214  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-566814: (2.105371342s)
--- PASS: TestKicCustomSubnet (34.24s)

TestKicStaticIP (34.61s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-640233 --static-ip=192.168.200.200
E1107 23:43:37.640433  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:42.760970  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:43:49.734573  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:43:53.001160  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-640233 --static-ip=192.168.200.200: (32.229075035s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-640233 ip
helpers_test.go:175: Cleaning up "static-ip-640233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-640233
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-640233: (2.190059049s)
--- PASS: TestKicStaticIP (34.61s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (71.13s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-601566 --driver=docker  --container-runtime=containerd
E1107 23:44:13.481358  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-601566 --driver=docker  --container-runtime=containerd: (31.301995128s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-604414 --driver=docker  --container-runtime=containerd
E1107 23:44:54.441550  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-604414 --driver=docker  --container-runtime=containerd: (34.464146844s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-601566
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-604414
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-604414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-604414
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-604414: (1.990125658s)
helpers_test.go:175: Cleaning up "first-601566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-601566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-601566: (2.02376283s)
--- PASS: TestMinikubeProfile (71.13s)

TestMountStart/serial/StartWithMountFirst (9.27s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-913443 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-913443 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.272997467s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.27s)

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-913443 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.1s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-915125 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-915125 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.098295086s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.10s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-915125 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-913443 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-913443 --alsologtostderr -v=5: (1.704978722s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-915125 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-915125
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-915125: (1.240352529s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-915125
E1107 23:45:42.547139  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-915125: (6.741014351s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-915125 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (73.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-558775 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1107 23:46:10.231622  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1107 23:46:16.365356  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-558775 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m13.005421515s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.58s)

TestMultiNode/serial/DeployApp2Nodes (4.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-558775 -- rollout status deployment/busybox: (2.268915382s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-fksf4 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-vpjjh -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-fksf4 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-vpjjh -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-fksf4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-vpjjh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.67s)

TestMultiNode/serial/PingHostFrom2Pods (1.24s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-fksf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-fksf4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-vpjjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-558775 -- exec busybox-5bc68d56bd-vpjjh -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.24s)
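The shell pipeline in the test above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) pulls the resolved host IP out of busybox's nslookup output before pinging it. A minimal Python sketch of the same extraction; the sample output and the `host_ip` helper are hypothetical illustrations, assuming busybox prints the resolved address on line 5 in this layout:

```python
# Hypothetical busybox-style nslookup output; the resolved address for the
# queried name appears on the fifth line in this layout.
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal
"""

def host_ip(nslookup_output: str) -> str:
    # awk 'NR==5' selects the fifth line; cut -d' ' -f3 takes the third
    # space-separated field ("Address", "1:", "<ip>", ...).
    line5 = nslookup_output.splitlines()[4]
    return line5.split(" ")[2]

print(host_ip(sample))  # → 192.168.58.1
```

The extracted address is then fed to `ping -c 1`, which the log above shows resolving to 192.168.58.1 for this cluster.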

TestMultiNode/serial/AddNode (18.06s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-558775 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-558775 -v 3 --alsologtostderr: (17.334103112s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.06s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (11.65s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp testdata/cp-test.txt multinode-558775:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3609068275/001/cp-test_multinode-558775.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775:/home/docker/cp-test.txt multinode-558775-m02:/home/docker/cp-test_multinode-558775_multinode-558775-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m02 "sudo cat /home/docker/cp-test_multinode-558775_multinode-558775-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775:/home/docker/cp-test.txt multinode-558775-m03:/home/docker/cp-test_multinode-558775_multinode-558775-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m03 "sudo cat /home/docker/cp-test_multinode-558775_multinode-558775-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp testdata/cp-test.txt multinode-558775-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3609068275/001/cp-test_multinode-558775-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775-m02:/home/docker/cp-test.txt multinode-558775:/home/docker/cp-test_multinode-558775-m02_multinode-558775.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775 "sudo cat /home/docker/cp-test_multinode-558775-m02_multinode-558775.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775-m02:/home/docker/cp-test.txt multinode-558775-m03:/home/docker/cp-test_multinode-558775-m02_multinode-558775-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m03 "sudo cat /home/docker/cp-test_multinode-558775-m02_multinode-558775-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp testdata/cp-test.txt multinode-558775-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3609068275/001/cp-test_multinode-558775-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775-m03:/home/docker/cp-test.txt multinode-558775:/home/docker/cp-test_multinode-558775-m03_multinode-558775.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775 "sudo cat /home/docker/cp-test_multinode-558775-m03_multinode-558775.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 cp multinode-558775-m03:/home/docker/cp-test.txt multinode-558775-m02:/home/docker/cp-test_multinode-558775-m03_multinode-558775-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 ssh -n multinode-558775-m02 "sudo cat /home/docker/cp-test_multinode-558775-m03_multinode-558775-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.65s)

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-558775 node stop m03: (1.273858931s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-558775 status: exit status 7 (586.239884ms)

-- stdout --
	multinode-558775
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-558775-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-558775-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
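The status output above uses a simple plain-text layout: a node name on its own line, followed by indented `key: value` pairs. A minimal sketch of how such output could be parsed into a dict per node; `parse_status` and the truncated sample are illustrative only, not part of minikube:

```python
# Parse minikube's plain-text `status` output (node name followed by
# "key: value" lines) into {node_name: {key: value}}.
def parse_status(text: str) -> dict:
    nodes, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if ":" in line:
            key, value = line.split(":", 1)
            nodes[current][key.strip()] = value.strip()
        else:
            # A line without a colon starts a new node section.
            current = line
            nodes[current] = {}
    return nodes

status = parse_status("""multinode-558775
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-558775-m03
type: Worker
host: Stopped
kubelet: Stopped
""")
print(status["multinode-558775-m03"]["host"])  # → Stopped
```

Note that `minikube status` exits non-zero (exit status 7 in the run above) whenever any node is stopped, so a caller scripting against it would still need to read stdout on failure.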
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr: exit status 7 (595.39517ms)

-- stdout --
	multinode-558775
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-558775-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-558775-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 23:47:42.988825  339138 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:47:42.989024  339138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:47:42.989035  339138 out.go:309] Setting ErrFile to fd 2...
	I1107 23:47:42.989042  339138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:47:42.989317  339138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:47:42.989487  339138 out.go:303] Setting JSON to false
	I1107 23:47:42.989664  339138 mustload.go:65] Loading cluster: multinode-558775
	I1107 23:47:42.989723  339138 notify.go:220] Checking for updates...
	I1107 23:47:42.990121  339138 config.go:182] Loaded profile config "multinode-558775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:47:42.990138  339138 status.go:255] checking status of multinode-558775 ...
	I1107 23:47:42.991483  339138 cli_runner.go:164] Run: docker container inspect multinode-558775 --format={{.State.Status}}
	I1107 23:47:43.022836  339138 status.go:330] multinode-558775 host status = "Running" (err=<nil>)
	I1107 23:47:43.022857  339138 host.go:66] Checking if "multinode-558775" exists ...
	I1107 23:47:43.023180  339138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-558775
	I1107 23:47:43.042775  339138 host.go:66] Checking if "multinode-558775" exists ...
	I1107 23:47:43.043116  339138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:47:43.043179  339138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-558775
	I1107 23:47:43.069759  339138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/multinode-558775/id_rsa Username:docker}
	I1107 23:47:43.163812  339138 ssh_runner.go:195] Run: systemctl --version
	I1107 23:47:43.169913  339138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:47:43.183938  339138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:47:43.272208  339138 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-07 23:47:43.261462092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1107 23:47:43.272813  339138 kubeconfig.go:92] found "multinode-558775" server: "https://192.168.58.2:8443"
	I1107 23:47:43.272839  339138 api_server.go:166] Checking apiserver status ...
	I1107 23:47:43.272879  339138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:47:43.286966  339138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	I1107 23:47:43.299845  339138 api_server.go:182] apiserver freezer: "12:freezer:/docker/442e82d9159cbdb41a8d4e896c37f3b32da061cac4a543596d4a9bf8da6eafed/kubepods/burstable/pod931988d647861ab4e204dc5658a7474e/feb87009bd8715d819d79ad42a237b4f86b7a2cc85d2b1bcd4d728e372c3c193"
	I1107 23:47:43.299927  339138 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/442e82d9159cbdb41a8d4e896c37f3b32da061cac4a543596d4a9bf8da6eafed/kubepods/burstable/pod931988d647861ab4e204dc5658a7474e/feb87009bd8715d819d79ad42a237b4f86b7a2cc85d2b1bcd4d728e372c3c193/freezer.state
	I1107 23:47:43.311496  339138 api_server.go:204] freezer state: "THAWED"
	I1107 23:47:43.311526  339138 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 23:47:43.320592  339138 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1107 23:47:43.320628  339138 status.go:421] multinode-558775 apiserver status = Running (err=<nil>)
	I1107 23:47:43.320641  339138 status.go:257] multinode-558775 status: &{Name:multinode-558775 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:47:43.320660  339138 status.go:255] checking status of multinode-558775-m02 ...
	I1107 23:47:43.320984  339138 cli_runner.go:164] Run: docker container inspect multinode-558775-m02 --format={{.State.Status}}
	I1107 23:47:43.339821  339138 status.go:330] multinode-558775-m02 host status = "Running" (err=<nil>)
	I1107 23:47:43.339848  339138 host.go:66] Checking if "multinode-558775-m02" exists ...
	I1107 23:47:43.340159  339138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-558775-m02
	I1107 23:47:43.358969  339138 host.go:66] Checking if "multinode-558775-m02" exists ...
	I1107 23:47:43.359308  339138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:47:43.359361  339138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-558775-m02
	I1107 23:47:43.382112  339138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/17585-253150/.minikube/machines/multinode-558775-m02/id_rsa Username:docker}
	I1107 23:47:43.472375  339138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:47:43.487332  339138 status.go:257] multinode-558775-m02 status: &{Name:multinode-558775-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:47:43.487369  339138 status.go:255] checking status of multinode-558775-m03 ...
	I1107 23:47:43.487696  339138 cli_runner.go:164] Run: docker container inspect multinode-558775-m03 --format={{.State.Status}}
	I1107 23:47:43.507419  339138 status.go:330] multinode-558775-m03 host status = "Stopped" (err=<nil>)
	I1107 23:47:43.507442  339138 status.go:343] host is not running, skipping remaining checks
	I1107 23:47:43.507450  339138 status.go:257] multinode-558775-m03 status: &{Name:multinode-558775-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
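The stderr log above shows how the status check locates the apiserver's cgroup: it greps `/proc/<pid>/cgroup` for the `freezer` entry, then reads `freezer.state` under `/sys/fs/cgroup/freezer` at that path. A sketch of that path derivation, using a shortened hypothetical cgroup line (the real container and pod hashes are much longer):

```python
# Hypothetical /proc/<pid>/cgroup entry in the "<hierarchy>:<controller>:<path>"
# format shown in the StopNode log above.
line = "12:freezer:/docker/442e82d9/kubepods/burstable/podabc/feb87009"

def freezer_state_path(cgroup_line: str) -> str:
    hierarchy, controller, path = cgroup_line.split(":", 2)
    if controller != "freezer":
        raise ValueError("not a freezer cgroup entry")
    # The log reads freezer.state under the cgroup-v1 freezer mount point.
    return "/sys/fs/cgroup/freezer" + path + "/freezer.state"

print(freezer_state_path(line))
```

A "THAWED" value in that file, as logged above, means the apiserver's cgroup is not frozen, so the check proceeds to the `/healthz` probe.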

TestMultiNode/serial/StartAfterStop (12.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-558775 node start m03 --alsologtostderr: (11.371680647s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.28s)

TestMultiNode/serial/RestartKeepsNodes (122.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-558775
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-558775
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-558775: (25.13248021s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-558775 --wait=true -v=8 --alsologtostderr
E1107 23:48:32.523125  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1107 23:48:49.734546  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1107 23:49:00.206594  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-558775 --wait=true -v=8 --alsologtostderr: (1m37.230758477s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-558775
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.55s)

TestMultiNode/serial/DeleteNode (5.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-558775 node delete m03: (4.549908771s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)
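The go-template passed to `kubectl get nodes` above prints the status of each node's "Ready" condition. The same selection written out in Python, over hypothetical minimal stand-ins for the node items that `kubectl get nodes -o json` would return:

```python
# Hypothetical node items; only the .status.conditions shape used by the
# go-template above is modeled.
nodes = [
    {"status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "True"},
    ]}},
    {"status": {"conditions": [
        {"type": "Ready", "status": "True"},
    ]}},
]

def ready_statuses(items):
    # Mirrors: range .items -> range .status.conditions -> if eq .type "Ready"
    out = []
    for item in items:
        for cond in item["status"]["conditions"]:
            if cond["type"] == "Ready":
                out.append(cond["status"])
    return out

print(ready_statuses(nodes))  # → ['True', 'True']
```

After deleting m03, the test expects exactly one "True" per remaining node, confirming both survivors report Ready.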

TestMultiNode/serial/StopMultiNode (24.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 stop
E1107 23:50:12.778986  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-558775 stop: (24.059006076s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-558775 status: exit status 7 (111.507981ms)

-- stdout --
	multinode-558775
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-558775-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr: exit status 7 (120.151486ms)

-- stdout --
	multinode-558775
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-558775-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 23:50:27.932825  347687 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:50:27.933024  347687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:50:27.933052  347687 out.go:309] Setting ErrFile to fd 2...
	I1107 23:50:27.933073  347687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:50:27.933422  347687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1107 23:50:27.933659  347687 out.go:303] Setting JSON to false
	I1107 23:50:27.933785  347687 mustload.go:65] Loading cluster: multinode-558775
	I1107 23:50:27.933861  347687 notify.go:220] Checking for updates...
	I1107 23:50:27.934302  347687 config.go:182] Loaded profile config "multinode-558775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1107 23:50:27.934340  347687 status.go:255] checking status of multinode-558775 ...
	I1107 23:50:27.936169  347687 cli_runner.go:164] Run: docker container inspect multinode-558775 --format={{.State.Status}}
	I1107 23:50:27.955754  347687 status.go:330] multinode-558775 host status = "Stopped" (err=<nil>)
	I1107 23:50:27.955777  347687 status.go:343] host is not running, skipping remaining checks
	I1107 23:50:27.955784  347687 status.go:257] multinode-558775 status: &{Name:multinode-558775 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:50:27.955822  347687 status.go:255] checking status of multinode-558775-m02 ...
	I1107 23:50:27.956124  347687 cli_runner.go:164] Run: docker container inspect multinode-558775-m02 --format={{.State.Status}}
	I1107 23:50:27.975726  347687 status.go:330] multinode-558775-m02 host status = "Stopped" (err=<nil>)
	I1107 23:50:27.975749  347687 status.go:343] host is not running, skipping remaining checks
	I1107 23:50:27.975756  347687 status.go:257] multinode-558775-m02 status: &{Name:multinode-558775-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.29s)

TestMultiNode/serial/RestartMultiNode (78.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-558775 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1107 23:50:42.547809  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-558775 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.034432225s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-558775 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.81s)

TestMultiNode/serial/ValidateNameConflict (32.21s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-558775
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-558775-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-558775-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.117774ms)

-- stdout --
	* [multinode-558775-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-558775-m02' is duplicated with machine name 'multinode-558775-m02' in profile 'multinode-558775'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-558775-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-558775-m03 --driver=docker  --container-runtime=containerd: (29.585976924s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-558775
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-558775: exit status 80 (420.91634ms)

-- stdout --
	* Adding node m03 to cluster multinode-558775
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-558775-m03 already exists in multinode-558775-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-558775-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-558775-m03: (2.034818309s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.21s)

TestPreload (173.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-689860 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1107 23:53:32.523309  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-689860 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m20.111730601s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-689860 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-689860 image pull gcr.io/k8s-minikube/busybox: (1.577330668s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-689860
E1107 23:53:49.734678  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-689860: (12.019847776s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-689860 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-689860 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m17.361076871s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-689860 image list
helpers_test.go:175: Cleaning up "test-preload-689860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-689860
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-689860: (2.382590529s)
--- PASS: TestPreload (173.72s)

TestScheduledStopUnix (109.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-594927 --memory=2048 --driver=docker  --container-runtime=containerd
E1107 23:55:42.547564  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-594927 --memory=2048 --driver=docker  --container-runtime=containerd: (32.812763534s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594927 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-594927 -n scheduled-stop-594927
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594927 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594927 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-594927 -n scheduled-stop-594927
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-594927
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594927 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-594927
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-594927: exit status 7 (94.101973ms)

-- stdout --
	scheduled-stop-594927
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-594927 -n scheduled-stop-594927
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-594927 -n scheduled-stop-594927: exit status 7 (94.692608ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-594927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-594927
E1107 23:57:05.591878  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-594927: (5.255859041s)
--- PASS: TestScheduledStopUnix (109.92s)

TestInsufficientStorage (11.2s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-178870 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-178870 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.524675267s)

-- stdout --
	{"specversion":"1.0","id":"6c0d2235-6c18-4347-9ebf-52a3deff24e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-178870] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e9cdb60-388c-4bdd-aeca-73ee93b52238","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"c07561a8-4658-4036-8886-f5735703ff09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"563bf37f-0bb7-47c9-a5c7-df1c9ba4fb9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig"}}
	{"specversion":"1.0","id":"27ae2be4-a539-40da-9873-6a928905a5c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube"}}
	{"specversion":"1.0","id":"45c203bc-5b36-4742-9b1a-c82948741a81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"69224e3d-ece8-4770-b8d5-fae6ded5d278","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7bc697c5-feaa-4f63-9d3c-418703ad865a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a8ea4af7-ad54-4ece-ba1a-269944744b56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7bfe6338-c09a-4cd5-a9d1-e2e54b26b0b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c53d1cb3-1950-42fe-8c91-81f484d56982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0c4d2617-503d-4b72-b745-ee77801a815d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-178870 in cluster insufficient-storage-178870","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5f0d6dc-ad57-4e49-8a8b-fec157219114","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a2d854b-3882-43da-9a03-036075623ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cdbde69-b270-4845-8a29-375f2e0a4802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-178870 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-178870 --output=json --layout=cluster: exit status 7 (333.486623ms)

-- stdout --
	{"Name":"insufficient-storage-178870","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-178870","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 23:57:15.453644  365014 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-178870" does not appear in /home/jenkins/minikube-integration/17585-253150/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-178870 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-178870 --output=json --layout=cluster: exit status 7 (341.973792ms)

-- stdout --
	{"Name":"insufficient-storage-178870","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-178870","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 23:57:15.797416  365066 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-178870" does not appear in /home/jenkins/minikube-integration/17585-253150/kubeconfig
	E1107 23:57:15.809768  365066 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/insufficient-storage-178870/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-178870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-178870
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-178870: (2.001854721s)
--- PASS: TestInsufficientStorage (11.20s)

TestRunningBinaryUpgrade (65.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.1742327139.exe start -p running-upgrade-330975 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.1742327139.exe start -p running-upgrade-330975 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (36.381172421s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-330975 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1108 00:03:32.522868  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1108 00:03:49.734196  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-330975 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.213152595s)
helpers_test.go:175: Cleaning up "running-upgrade-330975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-330975
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-330975: (2.887946147s)
--- PASS: TestRunningBinaryUpgrade (65.24s)

TestKubernetesUpgrade (385.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1107 23:58:49.734036  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.958565228s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-247880
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-247880: (3.984591362s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-247880 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-247880 status --format={{.Host}}: exit status 7 (134.786502ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1107 23:59:55.566816  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1108 00:00:42.547503  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m45.285121004s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-247880 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (134.797495ms)

-- stdout --
	* [kubernetes-upgrade-247880] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-247880
	    minikube start -p kubernetes-upgrade-247880 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2478802 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-247880 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-247880 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.340739776s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-247880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-247880
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-247880: (2.287648275s)
--- PASS: TestKubernetesUpgrade (385.26s)

TestMissingContainerUpgrade (215.1s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.1979214917.exe start -p missing-upgrade-085821 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.1979214917.exe start -p missing-upgrade-085821 --memory=2200 --driver=docker  --container-runtime=containerd: (1m48.698495304s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-085821
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-085821: (10.381507415s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-085821
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-085821 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-085821 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m31.693905232s)
helpers_test.go:175: Cleaning up "missing-upgrade-085821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-085821
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-085821: (2.372959205s)
--- PASS: TestMissingContainerUpgrade (215.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-466930 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-466930 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.659414ms)

-- stdout --
	* [NoKubernetes-466930] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (38.17s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-466930 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-466930 --driver=docker  --container-runtime=containerd: (37.614660543s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-466930 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.17s)

TestNoKubernetes/serial/StartWithStopK8s (19.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-466930 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-466930 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.640527274s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-466930 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-466930 status -o json: exit status 2 (458.57566ms)

-- stdout --
	{"Name":"NoKubernetes-466930","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-466930
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-466930: (2.020280872s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.12s)
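The `status -o json` payload shown above is machine-readable; a minimal sketch of checking it programmatically, using the exact JSON captured in this run (field names are assumed stable for minikube v1.32, the version under test):

```python
import json

# Status payload copied verbatim from `minikube -p NoKubernetes-466930 status -o json` above.
payload = ('{"Name":"NoKubernetes-466930","Host":"Running","Kubelet":"Stopped",'
           '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(payload)

# The host container is up, but no Kubernetes components are running --
# which is what `--no-kubernetes` promises, and why `status` exits with code 2 here.
host_up = status["Host"] == "Running"
k8s_down = status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print(host_up and k8s_down)
```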

TestNoKubernetes/serial/Start (5.83s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-466930 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-466930 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.832932052s)
--- PASS: TestNoKubernetes/serial/Start (5.83s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-466930 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-466930 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.720739ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (0.64s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.64s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-466930
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-466930: (1.308269628s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (7.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-466930 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-466930 --driver=docker  --container-runtime=containerd: (7.864240521s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-466930 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-466930 "sudo systemctl is-active --quiet service kubelet": exit status 1 (309.141926ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStoppedBinaryUpgrade/Setup (1.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.73s)

TestStoppedBinaryUpgrade/Upgrade (109.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.347121308.exe start -p stopped-upgrade-736830 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.347121308.exe start -p stopped-upgrade-736830 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.565552208s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.347121308.exe -p stopped-upgrade-736830 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.347121308.exe -p stopped-upgrade-736830 stop: (20.09688988s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-736830 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-736830 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.030787218s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.70s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.55s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-736830
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-736830: (1.550726423s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.55s)

TestPause/serial/Start (67.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-529705 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-529705 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m7.488299226s)
--- PASS: TestPause/serial/Start (67.49s)

TestPause/serial/SecondStartNoReconfiguration (8.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-529705 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-529705 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.998343924s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.01s)

TestPause/serial/Pause (1.06s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-529705 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-529705 --alsologtostderr -v=5: (1.060697026s)
--- PASS: TestPause/serial/Pause (1.06s)

TestPause/serial/VerifyStatus (0.47s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-529705 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-529705 --output=json --layout=cluster: exit status 2 (472.493438ms)

-- stdout --
	{"Name":"pause-529705","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-529705","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
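The `--layout=cluster` status JSON above uses HTTP-style status codes per component (418/"Paused", 405/"Stopped", 200/"OK", as shown in this run). A minimal sketch of reading it, using a trimmed copy of the payload above (trimming is my own; the full JSON also carries `BinaryVersion`, `Step`, and kubeconfig fields):

```python
import json

# Trimmed from the `minikube status --output=json --layout=cluster` payload above.
payload = json.loads('''{
  "Name": "pause-529705",
  "StatusCode": 418,
  "StatusName": "Paused",
  "Nodes": [{
    "Name": "pause-529705",
    "StatusCode": 200,
    "StatusName": "OK",
    "Components": {
      "apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
      "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}
    }
  }]
}''')

# A paused cluster reports Paused at the top level and per apiserver,
# with the kubelet stopped -- consistent with the exit status 2 above.
node = payload["Nodes"][0]
print(payload["StatusName"], node["Components"]["apiserver"]["StatusName"])
```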

TestPause/serial/Unpause (0.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-529705 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/PauseAgain (1.25s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-529705 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-529705 --alsologtostderr -v=5: (1.247926185s)
--- PASS: TestPause/serial/PauseAgain (1.25s)

TestPause/serial/DeletePaused (2.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-529705 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-529705 --alsologtostderr -v=5: (2.751799408s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

TestPause/serial/VerifyDeletedResources (0.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-529705
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-529705: exit status 1 (25.832225ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-529705: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)

TestNetworkPlugins/group/false (6.66s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-729786 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-729786 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (304.848864ms)

-- stdout --
	* [false-729786] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1108 00:05:21.492540  402029 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:05:21.492811  402029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:05:21.492837  402029 out.go:309] Setting ErrFile to fd 2...
	I1108 00:05:21.492856  402029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:05:21.493178  402029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-253150/.minikube/bin
	I1108 00:05:21.493654  402029 out.go:303] Setting JSON to false
	I1108 00:05:21.494736  402029 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9868,"bootTime":1699392054,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1108 00:05:21.494846  402029 start.go:138] virtualization:  
	I1108 00:05:21.498752  402029 out.go:177] * [false-729786] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1108 00:05:21.500776  402029 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:05:21.500864  402029 notify.go:220] Checking for updates...
	I1108 00:05:21.502627  402029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:05:21.504475  402029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-253150/kubeconfig
	I1108 00:05:21.506550  402029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-253150/.minikube
	I1108 00:05:21.508070  402029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 00:05:21.509534  402029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:05:21.511897  402029 config.go:182] Loaded profile config "force-systemd-flag-450366": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 00:05:21.511998  402029 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:05:21.545063  402029 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1108 00:05:21.545199  402029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 00:05:21.679568  402029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-08 00:05:21.666118511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1108 00:05:21.679698  402029 docker.go:295] overlay module found
	I1108 00:05:21.681665  402029 out.go:177] * Using the docker driver based on user configuration
	I1108 00:05:21.683352  402029 start.go:298] selected driver: docker
	I1108 00:05:21.683368  402029 start.go:902] validating driver "docker" against <nil>
	I1108 00:05:21.683394  402029 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:05:21.685684  402029 out.go:177] 
	W1108 00:05:21.687358  402029 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1108 00:05:21.689066  402029 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-729786 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-729786

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-729786" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-729786" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-729786" does not exist

>>> k8s: kube-proxy logs:
error: context "false-729786" does not exist

>>> host: kubelet daemon status:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: kubelet daemon config:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> k8s: kubelet logs:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-729786

>>> host: docker daemon status:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: docker daemon config:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /etc/docker/daemon.json:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: docker system info:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: cri-docker daemon status:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: cri-docker daemon config:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: cri-dockerd version:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: containerd daemon status:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: containerd daemon config:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /etc/containerd/config.toml:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: containerd config dump:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: crio daemon status:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: crio daemon config:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: /etc/crio:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

>>> host: crio config:
* Profile "false-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-729786"

----------------------- debugLogs end: false-729786 [took: 6.102756493s] --------------------------------
helpers_test.go:175: Cleaning up "false-729786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-729786
--- PASS: TestNetworkPlugins/group/false (6.66s)

TestStartStop/group/old-k8s-version/serial/FirstStart (125.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-056564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1108 00:08:32.523465  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1108 00:08:49.735068  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-056564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m5.393616856s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.39s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-056564 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e136c44-5817-43ae-8abb-e0f580bae91d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e136c44-5817-43ae-8abb-e0f580bae91d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.03016249s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-056564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-056564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-056564 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/old-k8s-version/serial/Stop (12.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-056564 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-056564 --alsologtostderr -v=3: (12.222245652s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-056564 -n old-k8s-version-056564
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-056564 -n old-k8s-version-056564: exit status 7 (122.93779ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-056564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/old-k8s-version/serial/SecondStart (661.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-056564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-056564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m1.024781332s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-056564 -n old-k8s-version-056564
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (661.48s)

TestStartStop/group/no-preload/serial/FirstStart (85.56s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-823375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:10:42.547507  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-823375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m25.557808299s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.56s)

TestStartStop/group/no-preload/serial/DeployApp (8.50s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-823375 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5f832c42-3d65-460a-8cf1-0d15b62f381b] Pending
helpers_test.go:344: "busybox" [5f832c42-3d65-460a-8cf1-0d15b62f381b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5f832c42-3d65-460a-8cf1-0d15b62f381b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.038057955s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-823375 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.30s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-823375 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-823375 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.180280205s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-823375 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/no-preload/serial/Stop (12.22s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-823375 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-823375 --alsologtostderr -v=3: (12.223812583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-823375 -n no-preload-823375
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-823375 -n no-preload-823375: exit status 7 (94.447265ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-823375 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (338.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-823375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:13:32.523444  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1108 00:13:45.592794  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1108 00:13:49.734233  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1108 00:15:42.547569  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1108 00:16:35.567997  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-823375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m37.449041658s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-823375 -n no-preload-823375
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-stkss" [92497f41-bb6e-4ba8-8952-324a5673388e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-stkss" [92497f41-bb6e-4ba8-8952-324a5673388e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.029140898s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-stkss" [92497f41-bb6e-4ba8-8952-324a5673388e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010654956s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-823375 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-823375 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/no-preload/serial/Pause (3.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-823375 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-823375 -n no-preload-823375
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-823375 -n no-preload-823375: exit status 2 (386.405645ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-823375 -n no-preload-823375
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-823375 -n no-preload-823375: exit status 2 (380.18822ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-823375 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-823375 -n no-preload-823375
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-823375 -n no-preload-823375
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.64s)

TestStartStop/group/embed-certs/serial/FirstStart (59.90s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-882901 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-882901 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (59.901598442s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.90s)

TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-882901 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ddcc1c6b-44b4-4363-b3db-77fd3d067c53] Pending
helpers_test.go:344: "busybox" [ddcc1c6b-44b4-4363-b3db-77fd3d067c53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ddcc1c6b-44b4-4363-b3db-77fd3d067c53] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.028902816s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-882901 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-882901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-882901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.120546351s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-882901 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (12.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-882901 --alsologtostderr -v=3
E1108 00:18:32.523619  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-882901 --alsologtostderr -v=3: (12.116706058s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-882901 -n embed-certs-882901
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-882901 -n embed-certs-882901: exit status 7 (101.854859ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-882901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (339.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-882901 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:18:49.734785  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-882901 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m38.486190323s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-882901 -n embed-certs-882901
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (339.03s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mxtzq" [a20d61b1-4eab-4309-b54e-0b140561c556] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.030795851s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mxtzq" [a20d61b1-4eab-4309-b54e-0b140561c556] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017291972s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-056564 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-056564 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-056564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-056564 -n old-k8s-version-056564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-056564 -n old-k8s-version-056564: exit status 2 (369.262079ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-056564 -n old-k8s-version-056564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-056564 -n old-k8s-version-056564: exit status 2 (385.845347ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-056564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-056564 -n old-k8s-version-056564
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-056564 -n old-k8s-version-056564
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.49s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-286975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:20:42.548085  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1108 00:20:58.824780  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:58.830115  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:58.840374  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:58.860547  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:58.900818  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:58.981465  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:59.141796  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:20:59.462361  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:21:00.103506  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:21:01.384442  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:21:03.945344  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:21:09.066294  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:21:19.306576  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-286975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (45.976867687s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.98s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-286975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [23a0d306-d457-4183-bd7f-bee302f65bd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [23a0d306-d457-4183-bd7f-bee302f65bd6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.03787838s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-286975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-286975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-286975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.164432846s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-286975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-286975 --alsologtostderr -v=3
E1108 00:21:39.787473  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-286975 --alsologtostderr -v=3: (12.193501102s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.19s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975: exit status 7 (98.112836ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-286975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-286975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:22:20.748481  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:23:32.522456  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1108 00:23:32.780389  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1108 00:23:42.668654  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:23:49.734844  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1108 00:24:00.840434  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:00.845582  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:00.855804  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:00.876027  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:00.916252  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:00.996506  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:01.156865  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:01.477517  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:02.118353  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:03.398942  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:05.959156  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:11.079605  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:24:21.319978  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-286975 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m37.059275907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.60s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bxf87" [1bdfb51c-bef3-42ff-9583-054de76a91fc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bxf87" [1bdfb51c-bef3-42ff-9583-054de76a91fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.029725978s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bxf87" [1bdfb51c-bef3-42ff-9583-054de76a91fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010974858s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-882901 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-882901 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-882901 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-882901 -n embed-certs-882901
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-882901 -n embed-certs-882901: exit status 2 (373.284389ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-882901 -n embed-certs-882901
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-882901 -n embed-certs-882901: exit status 2 (397.381652ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-882901 --alsologtostderr -v=1
E1108 00:24:41.800836  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-882901 -n embed-certs-882901
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-882901 -n embed-certs-882901
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.66s)

TestStartStop/group/newest-cni/serial/FirstStart (46.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-013765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:25:22.761613  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-013765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (46.033466727s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-013765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-013765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.248583401s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-013765 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-013765 --alsologtostderr -v=3: (1.281978407s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-013765 -n newest-cni-013765
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-013765 -n newest-cni-013765: exit status 7 (112.886749ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-013765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (33.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-013765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1108 00:25:42.547822  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1108 00:25:58.825022  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-013765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (33.475846525s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-013765 -n newest-cni-013765
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.88s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-013765 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-013765 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-013765 -n newest-cni-013765
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-013765 -n newest-cni-013765: exit status 2 (402.12599ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-013765 -n newest-cni-013765
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-013765 -n newest-cni-013765: exit status 2 (387.353909ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-013765 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-013765 -n newest-cni-013765
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-013765 -n newest-cni-013765
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.56s)
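The `Pause` sequence above tolerates exit status 2 from `minikube status` while components report `Paused`/`Stopped` (the harness logs `status error: exit status 2 (may be ok)`). A minimal sketch of that acceptance logic, using a stand-in command instead of a live profile; the helper name `status_maybe_ok` is illustrative, not part of the test suite:

```python
import subprocess

def status_maybe_ok(argv):
    """Run a status command the way the harness does: exit status 2 is
    tolerated (minikube exits non-zero while components are paused/stopped),
    and the captured stdout is returned for inspection."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    if proc.returncode not in (0, 2):
        raise RuntimeError(f"unexpected exit {proc.returncode}: {proc.stderr}")
    return proc.stdout.strip()

# Against a live cluster this would be, e.g. (profile name from the log above):
#   status_maybe_ok(["minikube", "status", "--format={{.APIServer}}",
#                    "-p", "newest-cni-013765"])
# Demonstrated with a stand-in that mimics the paused state:
print(status_maybe_ok(["sh", "-c", "echo Paused; exit 2"]))  # → Paused
```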

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1108 00:26:26.509403  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:26:44.682554  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m24.238262616s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-svmhv" [aca7ada7-2cc3-499f-a8dc-c7e3e6c91e46] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-svmhv" [aca7ada7-2cc3-499f-a8dc-c7e3e6c91e46] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.026921664s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.03s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f4hmg" [d7f330c0-48b8-4f1a-82f6-a2976681a920] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f4hmg" [d7f330c0-48b8-4f1a-82f6-a2976681a920] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010976333s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-svmhv" [aca7ada7-2cc3-499f-a8dc-c7e3e6c91e46] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018778291s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-286975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-286975 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-286975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975: exit status 2 (363.129881ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975: exit status 2 (409.054968ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-286975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-286975 -n default-k8s-diff-port-286975
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.56s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)
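The DNS check above runs `nslookup kubernetes.default` inside the netcat pod, where the cluster DNS search path expands the short name. A loose local stand-in that only asks the system resolver whether a name resolves; `resolves` is an illustrative helper, not part of the test suite:

```python
import socket

def resolves(name):
    """Loose stand-in for the in-pod `nslookup kubernetes.default` probe:
    report whether the local resolver can turn a name into addresses. Inside
    a pod, the cluster DNS search path is what makes the short name work."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # → True
```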

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (96.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m36.372457705s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.37s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
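The `Localhost` and `HairPin` checks above both reduce to a `nc -w 5 -z` TCP connect probe: the first against the pod's own loopback, the second against the pod's service name (`netcat`) so traffic hairpins back through the service. A sketch of the same probe in Python, exercised against a throwaway local listener; the helper name is illustrative:

```python
import socket

def tcp_reachable(host, port, timeout=5.0):
    """Equivalent of `nc -w 5 -z <host> <port>`: attempt a TCP connect within
    the timeout, send no data, and report success or failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Exercised against a throwaway listener on an OS-assigned port:
with socket.socket() as srv:
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    print(tcp_reachable("127.0.0.1", srv.getsockname()[1]))  # → True
```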

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.52s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1108 00:28:32.522748  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
E1108 00:28:49.734548  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/addons-257591/client.crt: no such file or directory
E1108 00:29:00.840464  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:29:28.523178  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.518360622s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wt8qt" [56c3a9e3-f447-461e-b082-360293f05855] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028042606s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xfp25" [29fb7457-cd20-4c9e-a09e-3026c3d8016a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037861359s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-24b99" [4a1a3247-fc07-4f93-9c34-868cb3ff9360] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-24b99" [4a1a3247-fc07-4f93-9c34-868cb3ff9360] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.011872227s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.48s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.48s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q8gvt" [603fe2aa-3cde-4546-87f3-0f84047e607d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q8gvt" [603fe2aa-3cde-4546-87f3-0f84047e607d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.011096625s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.48s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m13.978887506s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1108 00:30:25.593045  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1108 00:30:42.547268  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/functional-662509/client.crt: no such file or directory
E1108 00:30:58.824681  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/no-preload-823375/client.crt: no such file or directory
E1108 00:31:28.225019  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.230385  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.240661  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.260904  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.301097  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.381467  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.541818  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:28.862275  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:29.503209  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:31:30.784229  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m29.235718467s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b7dkj" [f45d228a-8adf-41fd-9ead-8f77675f7ffc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 00:31:33.344887  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-b7dkj" [f45d228a-8adf-41fd-9ead-8f77675f7ffc] Running
E1108 00:31:38.465456  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.014921426s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z875l" [9646fea0-4384-4b42-af3d-c48870b51476] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 00:31:48.706449  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z875l" [9646fea0-4384-4b42-af3d-c48870b51476] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.011279189s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (66.27s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1108 00:32:09.186864  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m6.272431088s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.27s)

TestNetworkPlugins/group/bridge/Start (88.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1108 00:32:41.038922  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.044188  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.054496  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.075046  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.115267  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.195560  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.356601  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:41.677543  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:42.318586  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:43.599288  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:46.159819  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:32:50.147987  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/default-k8s-diff-port-286975/client.crt: no such file or directory
E1108 00:32:51.280917  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
E1108 00:33:01.521700  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-729786 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m28.620785735s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.62s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xwcm6" [a0322f1a-a869-4849-9867-2a344cc2af9b] Running
E1108 00:33:15.568618  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/ingress-addon-legacy-537363/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.041686362s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mxcgb" [4173f885-1a60-4d39-807b-f8bf6ba334ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 00:33:22.004817  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-mxcgb" [4173f885-1a60-4d39-807b-f8bf6ba334ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.014958347s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-729786 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.50s)

TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-729786 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-57dtd" [80dbb1ee-1f0f-4b17-88b8-4fdd98dc5ede] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-57dtd" [80dbb1ee-1f0f-4b17-88b8-4fdd98dc5ede] Running
E1108 00:34:00.840559  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/old-k8s-version-056564/client.crt: no such file or directory
E1108 00:34:02.965498  258490 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-253150/.minikube/profiles/auto-729786/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.011620114s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-729786 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-729786 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (28/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.64s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-056831 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-056831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-056831
--- SKIP: TestDownloadOnlyKic (0.64s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-115802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-115802
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (5.1s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-729786 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-729786

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-729786

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

>>> host: /etc/hosts:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

>>> host: /etc/resolv.conf:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-729786

>>> host: crictl pods:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

>>> host: crictl containers:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

>>> k8s: describe netcat deployment:
error: context "kubenet-729786" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-729786" does not exist

>>> k8s: netcat logs:
error: context "kubenet-729786" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-729786" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-729786" does not exist

>>> k8s: coredns logs:
error: context "kubenet-729786" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-729786" does not exist

>>> k8s: api server logs:
error: context "kubenet-729786" does not exist

>>> host: /etc/cni:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

>>> host: ip a s:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"
>>> host: ip r s:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-729786" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-729786

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-729786"

                                                
                                                
----------------------- debugLogs end: kubenet-729786 [took: 4.886856716s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-729786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-729786
--- SKIP: TestNetworkPlugins/group/kubenet (5.10s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-729786 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-729786" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-729786

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: cri-dockerd version:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: containerd daemon status:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: containerd daemon config:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: containerd config dump:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: crio daemon status:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: crio daemon config:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: /etc/crio:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

>>> host: crio config:
* Profile "cilium-729786" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-729786"

----------------------- debugLogs end: cilium-729786 [took: 6.557805691s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-729786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-729786
--- SKIP: TestNetworkPlugins/group/cilium (6.81s)