Test Report: Docker_Linux_docker_arm64 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-17:30190

Failed tests (3/319)

Order  Failed test                                          Duration (s)
   25  TestAddons/parallel/Ingress                                 37.88
  102  TestFunctional/parallel/License                              0.24
  162  TestIngressAddonLegacy/serial/ValidateIngressAddons         59.43
TestAddons/parallel/Ingress (37.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-534909 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-534909 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-534909 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [373e7f75-d8a1-47fb-ac84-5b05143c6261] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [373e7f75-d8a1-47fb-ac84-5b05143c6261] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.016277506s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-534909 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.043895223s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-534909 addons disable ingress --alsologtostderr -v=1: (8.090272452s)
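The failure above comes down to `nslookup hello-john.test 192.168.49.2` timing out, which the test harness detects from the exit status and the "connection timed out" marker in stdout. A minimal sketch of that classification step (hypothetical wrapper, not harness code; the `output` variable here stands in for the real nslookup stdout captured above, and in a live check it would come from `nslookup hello-john.test "$(minikube -p addons-534909 ip)"`):

```shell
# Stand-in for the captured nslookup stdout from this run.
output=";; connection timed out; no servers could be reached"

# Classify the result the same way the log's failure message does:
# a timeout marker in stdout means the ingress-dns addon never answered.
if printf '%s\n' "$output" | grep -q 'connection timed out'; then
  echo "ingress-dns: lookup timed out"
else
  echo "ingress-dns: lookup succeeded"
fi
```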
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-534909
helpers_test.go:235: (dbg) docker inspect addons-534909:

-- stdout --
	[
	    {
	        "Id": "7490e94dcb85d850f1b9b6e137b065044e47da407d163b27af8510f253b468f8",
	        "Created": "2023-07-17T22:46:56.687606217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1391020,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:46:57.043983282Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/7490e94dcb85d850f1b9b6e137b065044e47da407d163b27af8510f253b468f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7490e94dcb85d850f1b9b6e137b065044e47da407d163b27af8510f253b468f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7490e94dcb85d850f1b9b6e137b065044e47da407d163b27af8510f253b468f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7490e94dcb85d850f1b9b6e137b065044e47da407d163b27af8510f253b468f8/7490e94dcb85d850f1b9b6e137b065044e47da407d163b27af8510f253b468f8-json.log",
	        "Name": "/addons-534909",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-534909:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-534909",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/593374b5997650eaa2ddca1e7468542ae8dc5e4d840d12e3cfb9fd26f8473ab8-init/diff:/var/lib/docker/overlay2/fdc677bc34c4dd81c3e2a60b8c6dfef55cbcd01465515913bdab326c77319b46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/593374b5997650eaa2ddca1e7468542ae8dc5e4d840d12e3cfb9fd26f8473ab8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/593374b5997650eaa2ddca1e7468542ae8dc5e4d840d12e3cfb9fd26f8473ab8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/593374b5997650eaa2ddca1e7468542ae8dc5e4d840d12e3cfb9fd26f8473ab8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-534909",
	                "Source": "/var/lib/docker/volumes/addons-534909/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-534909",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-534909",
	                "name.minikube.sigs.k8s.io": "addons-534909",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "60fe57d76ff6dd83fe48832fcc527c27804f94b035f4e0d24ed69729bc78f163",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34326"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34325"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34322"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34324"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34323"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/60fe57d76ff6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-534909": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7490e94dcb85",
	                        "addons-534909"
	                    ],
	                    "NetworkID": "e46122ff98c964e83b3549dd63f225235399711d617d8d02ffa487981e7e2382",
	                    "EndpointID": "45c342c2cd85d684262a27302adbafe5feb9160fbd5127b77271d3e144d2fc6b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-534909 -n addons-534909
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-534909 logs -n 25: (1.345742707s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-516896   | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | -p download-only-516896        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-516896   | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | -p download-only-516896        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 22:46 UTC |
	| delete  | -p download-only-516896        | download-only-516896   | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 22:46 UTC |
	| delete  | -p download-only-516896        | download-only-516896   | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 22:46 UTC |
	| start   | --download-only -p             | download-docker-120890 | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | download-docker-120890         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| delete  | -p download-docker-120890      | download-docker-120890 | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 22:46 UTC |
	| start   | --download-only -p             | binary-mirror-731746   | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | binary-mirror-731746           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41699         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-731746        | binary-mirror-731746   | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 22:46 UTC |
	| start   | -p addons-534909               | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 22:49 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	|         | addons-534909                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	|         | -p addons-534909               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-534909 ip               | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	| addons  | addons-534909 addons disable   | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-534909 addons           | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	|         | addons-534909                  |                        |         |         |                     |                     |
	| ssh     | addons-534909 ssh curl -s      | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-534909 ip               | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:49 UTC | 17 Jul 23 22:49 UTC |
	| addons  | addons-534909 addons disable   | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:50 UTC | 17 Jul 23 22:50 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-534909 addons           | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:50 UTC | 17 Jul 23 22:50 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-534909 addons disable   | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:50 UTC | 17 Jul 23 22:50 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons  | addons-534909 addons           | addons-534909          | jenkins | v1.31.0 | 17 Jul 23 22:50 UTC | 17 Jul 23 22:50 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:46:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:46:32.698969 1390553 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:46:32.699124 1390553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:46:32.699135 1390553 out.go:309] Setting ErrFile to fd 2...
	I0717 22:46:32.699141 1390553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:46:32.699418 1390553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	I0717 22:46:32.699843 1390553 out.go:303] Setting JSON to false
	I0717 22:46:32.700782 1390553 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23340,"bootTime":1689610653,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 22:46:32.700849 1390553 start.go:138] virtualization:  
	I0717 22:46:32.703929 1390553 out.go:177] * [addons-534909] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 22:46:32.706699 1390553 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:46:32.708791 1390553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:46:32.706892 1390553 notify.go:220] Checking for updates...
	I0717 22:46:32.713286 1390553 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:46:32.715500 1390553 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	I0717 22:46:32.717585 1390553 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 22:46:32.719597 1390553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:46:32.721875 1390553 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:46:32.748513 1390553 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:46:32.748621 1390553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:46:32.835651 1390553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 22:46:32.825903283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:46:32.835767 1390553 docker.go:294] overlay module found
	I0717 22:46:32.838109 1390553 out.go:177] * Using the docker driver based on user configuration
	I0717 22:46:32.840224 1390553 start.go:298] selected driver: docker
	I0717 22:46:32.840240 1390553 start.go:880] validating driver "docker" against <nil>
	I0717 22:46:32.840253 1390553 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:46:32.840952 1390553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:46:32.911245 1390553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 22:46:32.900790816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:46:32.911415 1390553 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:46:32.911652 1390553 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:46:32.914085 1390553 out.go:177] * Using Docker driver with root privileges
	I0717 22:46:32.915898 1390553 cni.go:84] Creating CNI manager for ""
	I0717 22:46:32.915919 1390553 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 22:46:32.915936 1390553 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 22:46:32.915951 1390553 start_flags.go:319] config:
	{Name:addons-534909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-534909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:
cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:46:32.918333 1390553 out.go:177] * Starting control plane node addons-534909 in cluster addons-534909
	I0717 22:46:32.920429 1390553 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 22:46:32.922637 1390553 out.go:177] * Pulling base image ...
	I0717 22:46:32.924600 1390553 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 22:46:32.924666 1390553 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0717 22:46:32.924681 1390553 cache.go:57] Caching tarball of preloaded images
	I0717 22:46:32.924682 1390553 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:46:32.924750 1390553 preload.go:174] Found /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 22:46:32.924760 1390553 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 22:46:32.925190 1390553 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/config.json ...
	I0717 22:46:32.925255 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/config.json: {Name:mkdebdedfc8bc03da86b69f99c0ec6d37a1fb501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:46:32.941567 1390553 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 22:46:32.941685 1390553 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 22:46:32.941710 1390553 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 22:46:32.941719 1390553 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 22:46:32.941729 1390553 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 22:46:32.941735 1390553 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0717 22:46:49.117430 1390553 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0717 22:46:49.117467 1390553 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:46:49.117520 1390553 start.go:365] acquiring machines lock for addons-534909: {Name:mk82b044dca1eac2b26dd75aee60c11ac8fe479a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:46:49.118284 1390553 start.go:369] acquired machines lock for "addons-534909" in 736.324µs
	I0717 22:46:49.118326 1390553 start.go:93] Provisioning new machine with config: &{Name:addons-534909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-534909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 22:46:49.118427 1390553 start.go:125] createHost starting for "" (driver="docker")
	I0717 22:46:49.121229 1390553 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 22:46:49.121491 1390553 start.go:159] libmachine.API.Create for "addons-534909" (driver="docker")
	I0717 22:46:49.121520 1390553 client.go:168] LocalClient.Create starting
	I0717 22:46:49.121657 1390553 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem
	I0717 22:46:49.479471 1390553 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem
	I0717 22:46:50.186090 1390553 cli_runner.go:164] Run: docker network inspect addons-534909 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 22:46:50.206367 1390553 cli_runner.go:211] docker network inspect addons-534909 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 22:46:50.206453 1390553 network_create.go:281] running [docker network inspect addons-534909] to gather additional debugging logs...
	I0717 22:46:50.206473 1390553 cli_runner.go:164] Run: docker network inspect addons-534909
	W0717 22:46:50.224982 1390553 cli_runner.go:211] docker network inspect addons-534909 returned with exit code 1
	I0717 22:46:50.225017 1390553 network_create.go:284] error running [docker network inspect addons-534909]: docker network inspect addons-534909: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-534909 not found
	I0717 22:46:50.225030 1390553 network_create.go:286] output of [docker network inspect addons-534909]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-534909 not found
	
	** /stderr **
	I0717 22:46:50.225098 1390553 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:46:50.243461 1390553 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017428b0}
	I0717 22:46:50.243504 1390553 network_create.go:123] attempt to create docker network addons-534909 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 22:46:50.243564 1390553 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-534909 addons-534909
	I0717 22:46:50.325299 1390553 network_create.go:107] docker network addons-534909 192.168.49.0/24 created
	I0717 22:46:50.325334 1390553 kic.go:117] calculated static IP "192.168.49.2" for the "addons-534909" container
	I0717 22:46:50.325416 1390553 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:46:50.342353 1390553 cli_runner.go:164] Run: docker volume create addons-534909 --label name.minikube.sigs.k8s.io=addons-534909 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:46:50.366194 1390553 oci.go:103] Successfully created a docker volume addons-534909
	I0717 22:46:50.366298 1390553 cli_runner.go:164] Run: docker run --rm --name addons-534909-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-534909 --entrypoint /usr/bin/test -v addons-534909:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 22:46:52.569830 1390553 cli_runner.go:217] Completed: docker run --rm --name addons-534909-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-534909 --entrypoint /usr/bin/test -v addons-534909:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (2.203477187s)
	I0717 22:46:52.569864 1390553 oci.go:107] Successfully prepared a docker volume addons-534909
	I0717 22:46:52.569888 1390553 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 22:46:52.569907 1390553 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:46:52.570022 1390553 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-534909:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:46:56.604157 1390553 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-534909:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.0340847s)
	I0717 22:46:56.604191 1390553 kic.go:199] duration metric: took 4.034280 seconds to extract preloaded images to volume
	W0717 22:46:56.604348 1390553 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:46:56.604477 1390553 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:46:56.671567 1390553 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-534909 --name addons-534909 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-534909 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-534909 --network addons-534909 --ip 192.168.49.2 --volume addons-534909:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:46:57.053735 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Running}}
	I0717 22:46:57.081126 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:46:57.110795 1390553 cli_runner.go:164] Run: docker exec addons-534909 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:46:57.190713 1390553 oci.go:144] the created container "addons-534909" has a running status.
	I0717 22:46:57.190740 1390553 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa...
	I0717 22:46:58.058525 1390553 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:46:58.089171 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:46:58.112658 1390553 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:46:58.112679 1390553 kic_runner.go:114] Args: [docker exec --privileged addons-534909 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 22:46:58.221067 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:46:58.240469 1390553 machine.go:88] provisioning docker machine ...
	I0717 22:46:58.240500 1390553 ubuntu.go:169] provisioning hostname "addons-534909"
	I0717 22:46:58.240572 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:46:58.266193 1390553 main.go:141] libmachine: Using SSH client type: native
	I0717 22:46:58.266652 1390553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34326 <nil> <nil>}
	I0717 22:46:58.266664 1390553 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-534909 && echo "addons-534909" | sudo tee /etc/hostname
	I0717 22:46:58.431206 1390553 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-534909
	
	I0717 22:46:58.431292 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:46:58.451439 1390553 main.go:141] libmachine: Using SSH client type: native
	I0717 22:46:58.451889 1390553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34326 <nil> <nil>}
	I0717 22:46:58.451914 1390553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-534909' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-534909/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-534909' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:46:58.586172 1390553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:46:58.586198 1390553 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1384661/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1384661/.minikube}
	I0717 22:46:58.586217 1390553 ubuntu.go:177] setting up certificates
	I0717 22:46:58.586225 1390553 provision.go:83] configureAuth start
	I0717 22:46:58.586293 1390553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-534909
	I0717 22:46:58.606488 1390553 provision.go:138] copyHostCerts
	I0717 22:46:58.606574 1390553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.pem (1078 bytes)
	I0717 22:46:58.606699 1390553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1384661/.minikube/cert.pem (1123 bytes)
	I0717 22:46:58.606767 1390553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1384661/.minikube/key.pem (1679 bytes)
	I0717 22:46:58.606820 1390553 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca-key.pem org=jenkins.addons-534909 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-534909]
	I0717 22:46:59.150944 1390553 provision.go:172] copyRemoteCerts
	I0717 22:46:59.151051 1390553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:46:59.151096 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:46:59.168880 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:46:59.263570 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 22:46:59.295423 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:46:59.325098 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:46:59.354765 1390553 provision.go:86] duration metric: configureAuth took 768.520162ms
	I0717 22:46:59.354791 1390553 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:46:59.355003 1390553 config.go:182] Loaded profile config "addons-534909": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 22:46:59.355065 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:46:59.373069 1390553 main.go:141] libmachine: Using SSH client type: native
	I0717 22:46:59.373513 1390553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34326 <nil> <nil>}
	I0717 22:46:59.373529 1390553 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 22:46:59.506801 1390553 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 22:46:59.506863 1390553 ubuntu.go:71] root file system type: overlay
	I0717 22:46:59.506979 1390553 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 22:46:59.507052 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:46:59.526187 1390553 main.go:141] libmachine: Using SSH client type: native
	I0717 22:46:59.526627 1390553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34326 <nil> <nil>}
	I0717 22:46:59.526713 1390553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 22:46:59.668160 1390553 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 22:46:59.668278 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:46:59.687294 1390553 main.go:141] libmachine: Using SSH client type: native
	I0717 22:46:59.687741 1390553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34326 <nil> <nil>}
	I0717 22:46:59.687768 1390553 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 22:47:00.704523 1390553 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:51:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:46:59.661988264 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
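The double `ExecStart=` in the drop-in above is the standard systemd override pattern: an empty `ExecStart=` first clears the command inherited from the base unit, then the replacement command is given. A minimal sketch of the same pattern (path and command are illustrative, not minikube's exact file):

```
# /etc/systemd/system/docker.service.d/10-machine.conf (illustrative path)
[Service]
# Clear the ExecStart inherited from the base unit; without this line,
# systemd rejects the unit: "more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After editing a drop-in, `systemctl daemon-reload` followed by `systemctl cat docker.service` shows the merged, effective unit.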
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 22:47:00.704550 1390553 machine.go:91] provisioned docker machine in 2.464060494s
	I0717 22:47:00.704560 1390553 client.go:171] LocalClient.Create took 11.583035064s
	I0717 22:47:00.704571 1390553 start.go:167] duration metric: libmachine.API.Create for "addons-534909" took 11.583081431s
	I0717 22:47:00.704579 1390553 start.go:300] post-start starting for "addons-534909" (driver="docker")
	I0717 22:47:00.704609 1390553 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:47:00.704694 1390553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:47:00.704747 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:00.726624 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:00.824369 1390553 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:47:00.828660 1390553 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:47:00.828700 1390553 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:47:00.828713 1390553 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:47:00.828720 1390553 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 22:47:00.828729 1390553 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1384661/.minikube/addons for local assets ...
	I0717 22:47:00.828808 1390553 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1384661/.minikube/files for local assets ...
	I0717 22:47:00.828833 1390553 start.go:303] post-start completed in 124.226808ms
	I0717 22:47:00.829219 1390553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-534909
	I0717 22:47:00.846567 1390553 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/config.json ...
	I0717 22:47:00.846867 1390553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:47:00.846917 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:00.864617 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:00.954922 1390553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:47:00.960882 1390553 start.go:128] duration metric: createHost completed in 11.842436434s
	I0717 22:47:00.960908 1390553 start.go:83] releasing machines lock for "addons-534909", held for 11.842605362s
	I0717 22:47:00.960979 1390553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-534909
	I0717 22:47:00.981284 1390553 ssh_runner.go:195] Run: cat /version.json
	I0717 22:47:00.981343 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:00.981357 1390553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:47:00.981424 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:01.005031 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:01.007293 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:01.238730 1390553 ssh_runner.go:195] Run: systemctl --version
	I0717 22:47:01.244846 1390553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:47:01.251071 1390553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 22:47:01.283230 1390553 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:47:01.283368 1390553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:47:01.320236 1390553 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 22:47:01.320314 1390553 start.go:466] detecting cgroup driver to use...
	I0717 22:47:01.320363 1390553 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:47:01.320549 1390553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:47:01.342546 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 22:47:01.355645 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 22:47:01.368754 1390553 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 22:47:01.368820 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 22:47:01.381347 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:47:01.393864 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 22:47:01.405816 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:47:01.417910 1390553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:47:01.429660 1390553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
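The run of sed invocations above rewrites /etc/containerd/config.toml in place. The cgroup-driver edit, for example, is a line-anchored regex replace that preserves indentation via a capture group; a standalone sketch of the same substitution (a Python rendering for illustration, not minikube's code):

```python
import re

def set_systemd_cgroup(toml_text: str, enabled: bool) -> str:
    """Mirror: sed -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'"""
    value = "true" if enabled else "false"
    # MULTILINE makes ^/$ anchor per line; \1 keeps the original indentation.
    return re.sub(r"^( *)SystemdCgroup = .*$",
                  rf"\1SystemdCgroup = {value}",
                  toml_text, flags=re.MULTILINE)

cfg = '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n'
print(set_systemd_cgroup(cfg, False))
```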
	I0717 22:47:01.442006 1390553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:47:01.452834 1390553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:47:01.463319 1390553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:47:01.555356 1390553 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 22:47:01.664308 1390553 start.go:466] detecting cgroup driver to use...
	I0717 22:47:01.664384 1390553 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:47:01.664473 1390553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 22:47:01.683099 1390553 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 22:47:01.683193 1390553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 22:47:01.698003 1390553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:47:01.718843 1390553 ssh_runner.go:195] Run: which cri-dockerd
	I0717 22:47:01.724257 1390553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 22:47:01.735802 1390553 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 22:47:01.759002 1390553 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 22:47:01.867614 1390553 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 22:47:01.984826 1390553 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 22:47:01.984880 1390553 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 22:47:02.013708 1390553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:47:02.116908 1390553 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 22:47:02.418034 1390553 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 22:47:02.515546 1390553 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 22:47:02.609861 1390553 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 22:47:02.715581 1390553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:47:02.822014 1390553 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 22:47:02.840775 1390553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:47:02.944826 1390553 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 22:47:03.038681 1390553 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 22:47:03.038856 1390553 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 22:47:03.045740 1390553 start.go:534] Will wait 60s for crictl version
	I0717 22:47:03.045864 1390553 ssh_runner.go:195] Run: which crictl
	I0717 22:47:03.052047 1390553 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:47:03.118286 1390553 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 22:47:03.118448 1390553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 22:47:03.156188 1390553 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 22:47:03.188091 1390553 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 22:47:03.188224 1390553 cli_runner.go:164] Run: docker network inspect addons-534909 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:47:03.206293 1390553 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 22:47:03.210911 1390553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
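The one-liner above is an idempotent hosts-file update: drop any line already ending in the target hostname (grep -v with a tab-anchored pattern), then append the fresh mapping, and install the result with sudo cp. The same logic as a standalone sketch (function name is mine):

```python
def upsert_host(hosts_text: str, ip: str, hostname: str) -> str:
    """Remove any existing '<ip>\\t<hostname>' entry, then append a fresh one."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
print(upsert_host(before, "192.168.49.1", "host.minikube.internal"))
```

Re-running it with the same arguments leaves exactly one entry, which is why minikube can apply it unconditionally on every start.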
	I0717 22:47:03.224723 1390553 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 22:47:03.224797 1390553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 22:47:03.246544 1390553 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 22:47:03.246567 1390553 docker.go:566] Images already preloaded, skipping extraction
	I0717 22:47:03.246633 1390553 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 22:47:03.269382 1390553 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 22:47:03.269406 1390553 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:47:03.269470 1390553 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 22:47:03.332093 1390553 cni.go:84] Creating CNI manager for ""
	I0717 22:47:03.332115 1390553 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 22:47:03.332149 1390553 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:47:03.332169 1390553 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-534909 NodeName:addons-534909 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:47:03.332306 1390553 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-534909"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:47:03.332373 1390553 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-534909 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-534909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:47:03.332447 1390553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:47:03.343545 1390553 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:47:03.343621 1390553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:47:03.354403 1390553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0717 22:47:03.375720 1390553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:47:03.397716 1390553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 22:47:03.420348 1390553 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:47:03.425144 1390553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:47:03.439375 1390553 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909 for IP: 192.168.49.2
	I0717 22:47:03.439411 1390553 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fe46c8df27a790849650201176fd556c5399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:03.439591 1390553 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key
	I0717 22:47:04.170900 1390553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt ...
	I0717 22:47:04.170931 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt: {Name:mk4ca1c4c738288c47fe8e6b1d21f1c64d7fc3c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:04.171127 1390553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key ...
	I0717 22:47:04.171139 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key: {Name:mkd6cf44ef97ce49d6a26e8ab0bfe4d6c2ac5aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:04.171220 1390553 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key
	I0717 22:47:05.000652 1390553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.crt ...
	I0717 22:47:05.000692 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.crt: {Name:mke0bfd3b257a55618135eff1f676cb9caae2f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.000935 1390553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key ...
	I0717 22:47:05.000945 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key: {Name:mk543e9638763d6bf994c9a901f348b384651fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.001059 1390553 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.key
	I0717 22:47:05.001096 1390553 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt with IP's: []
	I0717 22:47:05.197494 1390553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt ...
	I0717 22:47:05.197532 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: {Name:mkc874c4c9c34839274171d91e0816b14a01ec4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.197715 1390553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.key ...
	I0717 22:47:05.197730 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.key: {Name:mk281af899eaf0ce525a6faf50c9007f6188d34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.197814 1390553 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.key.dd3b5fb2
	I0717 22:47:05.197836 1390553 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:47:05.426792 1390553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.crt.dd3b5fb2 ...
	I0717 22:47:05.426823 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.crt.dd3b5fb2: {Name:mkbf261a7b88e2d33cb1852ba473661fad938cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.427018 1390553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.key.dd3b5fb2 ...
	I0717 22:47:05.427032 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.key.dd3b5fb2: {Name:mk0256914cf38ae95aaa0450b4f2091eee78d131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.427114 1390553 certs.go:337] copying /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.crt
	I0717 22:47:05.427185 1390553 certs.go:341] copying /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.key
	I0717 22:47:05.427235 1390553 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.key
	I0717 22:47:05.427254 1390553 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.crt with IP's: []
	I0717 22:47:05.725292 1390553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.crt ...
	I0717 22:47:05.725325 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.crt: {Name:mk1537a4cbf9474038811c5db091a9343a08d99d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.725523 1390553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.key ...
	I0717 22:47:05.725536 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.key: {Name:mk9daf9b30922065c6caadde5dcb112542728527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:05.725741 1390553 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:47:05.725785 1390553 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:47:05.725814 1390553 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:47:05.725848 1390553 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem (1679 bytes)
	I0717 22:47:05.726460 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:47:05.757946 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:47:05.786760 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:47:05.815667 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:47:05.844042 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:47:05.873403 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:47:05.902521 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:47:05.931138 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:47:05.960378 1390553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:47:05.990054 1390553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:47:06.016965 1390553 ssh_runner.go:195] Run: openssl version
	I0717 22:47:06.025026 1390553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:47:06.038982 1390553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:47:06.044449 1390553 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:47:06.044520 1390553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:47:06.054337 1390553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:47:06.068107 1390553 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:47:06.073181 1390553 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:47:06.073233 1390553 kubeadm.go:404] StartCluster: {Name:addons-534909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-534909 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:06.073379 1390553 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 22:47:06.095896 1390553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:47:06.108173 1390553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:47:06.120504 1390553 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 22:47:06.120578 1390553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:47:06.133008 1390553 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:47:06.133054 1390553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 22:47:06.189788 1390553 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:47:06.189886 1390553 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:47:06.253512 1390553 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:47:06.253603 1390553 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 22:47:06.253657 1390553 kubeadm.go:322] OS: Linux
	I0717 22:47:06.253720 1390553 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 22:47:06.253784 1390553 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 22:47:06.253847 1390553 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 22:47:06.253910 1390553 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 22:47:06.253976 1390553 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 22:47:06.254045 1390553 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 22:47:06.254115 1390553 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 22:47:06.254181 1390553 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 22:47:06.254248 1390553 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 22:47:06.336796 1390553 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:47:06.336922 1390553 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:47:06.337016 1390553 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:47:06.692379 1390553 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:47:06.698155 1390553 out.go:204]   - Generating certificates and keys ...
	I0717 22:47:06.698326 1390553 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:47:06.698395 1390553 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:47:06.973156 1390553 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:47:08.224575 1390553 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:47:08.964110 1390553 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 22:47:09.248177 1390553 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 22:47:09.682953 1390553 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 22:47:09.683379 1390553 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-534909 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 22:47:10.243370 1390553 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 22:47:10.244051 1390553 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-534909 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 22:47:10.612929 1390553 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:47:11.036069 1390553 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:47:11.944874 1390553 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 22:47:11.945210 1390553 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:47:12.495095 1390553 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:47:13.224253 1390553 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:47:13.636649 1390553 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:47:14.252658 1390553 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:47:14.268665 1390553 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:47:14.270110 1390553 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:47:14.270192 1390553 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:47:14.394605 1390553 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:47:14.397314 1390553 out.go:204]   - Booting up control plane ...
	I0717 22:47:14.397418 1390553 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:47:14.397494 1390553 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:47:14.397907 1390553 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:47:14.399230 1390553 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:47:14.402583 1390553 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:47:23.406070 1390553 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002913 seconds
	I0717 22:47:23.406184 1390553 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:47:23.422367 1390553 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:47:23.948239 1390553 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:47:23.948427 1390553 kubeadm.go:322] [mark-control-plane] Marking the node addons-534909 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:47:24.461960 1390553 kubeadm.go:322] [bootstrap-token] Using token: igit2u.winhse0geavpeojr
	I0717 22:47:24.465797 1390553 out.go:204]   - Configuring RBAC rules ...
	I0717 22:47:24.465978 1390553 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:47:24.472342 1390553 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:47:24.482064 1390553 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:47:24.486597 1390553 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:47:24.490697 1390553 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:47:24.495174 1390553 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:47:24.509893 1390553 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:47:24.770476 1390553 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:47:24.883343 1390553 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:47:24.885676 1390553 kubeadm.go:322] 
	I0717 22:47:24.885745 1390553 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:47:24.885759 1390553 kubeadm.go:322] 
	I0717 22:47:24.885833 1390553 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:47:24.885838 1390553 kubeadm.go:322] 
	I0717 22:47:24.885862 1390553 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:47:24.885917 1390553 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:47:24.885967 1390553 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:47:24.885972 1390553 kubeadm.go:322] 
	I0717 22:47:24.886023 1390553 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:47:24.886027 1390553 kubeadm.go:322] 
	I0717 22:47:24.886072 1390553 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:47:24.886076 1390553 kubeadm.go:322] 
	I0717 22:47:24.886125 1390553 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:47:24.886195 1390553 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:47:24.886260 1390553 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:47:24.886264 1390553 kubeadm.go:322] 
	I0717 22:47:24.886343 1390553 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:47:24.886415 1390553 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:47:24.886419 1390553 kubeadm.go:322] 
	I0717 22:47:24.886503 1390553 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token igit2u.winhse0geavpeojr \
	I0717 22:47:24.886601 1390553 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e5d5c8c9181b8ed72220af3cac9466140f0edb69a687eef1ac98c0aceaf43e58 \
	I0717 22:47:24.886620 1390553 kubeadm.go:322] 	--control-plane 
	I0717 22:47:24.886624 1390553 kubeadm.go:322] 
	I0717 22:47:24.886709 1390553 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:47:24.886713 1390553 kubeadm.go:322] 
	I0717 22:47:24.886790 1390553 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token igit2u.winhse0geavpeojr \
	I0717 22:47:24.886885 1390553 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e5d5c8c9181b8ed72220af3cac9466140f0edb69a687eef1ac98c0aceaf43e58 
	I0717 22:47:24.894799 1390553 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 22:47:24.895014 1390553 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:47:24.895049 1390553 cni.go:84] Creating CNI manager for ""
	I0717 22:47:24.895089 1390553 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 22:47:24.898572 1390553 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 22:47:24.900584 1390553 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:47:24.906446 1390553 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:47:24.906472 1390553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:47:24.937271 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:47:26.021785 1390553 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.084468748s)
	I0717 22:47:26.021841 1390553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:47:26.021962 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:26.021973 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=addons-534909 minikube.k8s.io/updated_at=2023_07_17T22_47_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:26.240162 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:26.240221 1390553 ops.go:34] apiserver oom_adj: -16
	I0717 22:47:26.843915 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:27.343978 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:27.844160 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:28.343388 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:28.843629 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:29.344045 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:29.844179 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:30.343345 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:30.843911 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:31.343310 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:31.844236 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:32.344016 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:32.843383 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:33.343423 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:33.844073 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:34.344037 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:34.844212 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:35.343331 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:35.843537 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:36.343778 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:36.843360 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:37.344017 1390553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:47:37.521850 1390553 kubeadm.go:1081] duration metric: took 11.499960953s to wait for elevateKubeSystemPrivileges.
	I0717 22:47:37.521878 1390553 kubeadm.go:406] StartCluster complete in 31.448651598s
	I0717 22:47:37.521895 1390553 settings.go:142] acquiring lock: {Name:mkc0c7943c743f0a2c4e51e89031f3fcf4ae225e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:37.522028 1390553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:47:37.522386 1390553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/kubeconfig: {Name:mk792c43221d3b29507daafdb089ed87fdff17a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:47:37.522575 1390553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:47:37.522853 1390553 config.go:182] Loaded profile config "addons-534909": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 22:47:37.522999 1390553 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 22:47:37.523080 1390553 addons.go:69] Setting volumesnapshots=true in profile "addons-534909"
	I0717 22:47:37.523094 1390553 addons.go:231] Setting addon volumesnapshots=true in "addons-534909"
	I0717 22:47:37.523150 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.523598 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.524097 1390553 addons.go:69] Setting cloud-spanner=true in profile "addons-534909"
	I0717 22:47:37.524117 1390553 addons.go:231] Setting addon cloud-spanner=true in "addons-534909"
	I0717 22:47:37.524150 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.524550 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.524928 1390553 addons.go:69] Setting inspektor-gadget=true in profile "addons-534909"
	I0717 22:47:37.524956 1390553 addons.go:231] Setting addon inspektor-gadget=true in "addons-534909"
	I0717 22:47:37.524990 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.525386 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.525466 1390553 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-534909"
	I0717 22:47:37.525496 1390553 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-534909"
	I0717 22:47:37.525525 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.525872 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.525933 1390553 addons.go:69] Setting default-storageclass=true in profile "addons-534909"
	I0717 22:47:37.525948 1390553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-534909"
	I0717 22:47:37.526154 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.526214 1390553 addons.go:69] Setting gcp-auth=true in profile "addons-534909"
	I0717 22:47:37.526229 1390553 mustload.go:65] Loading cluster: addons-534909
	I0717 22:47:37.526383 1390553 config.go:182] Loaded profile config "addons-534909": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 22:47:37.526580 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.526638 1390553 addons.go:69] Setting ingress=true in profile "addons-534909"
	I0717 22:47:37.526652 1390553 addons.go:231] Setting addon ingress=true in "addons-534909"
	I0717 22:47:37.526683 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.527047 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.527106 1390553 addons.go:69] Setting ingress-dns=true in profile "addons-534909"
	I0717 22:47:37.527123 1390553 addons.go:231] Setting addon ingress-dns=true in "addons-534909"
	I0717 22:47:37.527161 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.527581 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.527700 1390553 addons.go:69] Setting storage-provisioner=true in profile "addons-534909"
	I0717 22:47:37.527736 1390553 addons.go:231] Setting addon storage-provisioner=true in "addons-534909"
	I0717 22:47:37.527778 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.528150 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.528228 1390553 addons.go:69] Setting metrics-server=true in profile "addons-534909"
	I0717 22:47:37.528251 1390553 addons.go:231] Setting addon metrics-server=true in "addons-534909"
	I0717 22:47:37.528288 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.528666 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.560950 1390553 addons.go:69] Setting registry=true in profile "addons-534909"
	I0717 22:47:37.560990 1390553 addons.go:231] Setting addon registry=true in "addons-534909"
	I0717 22:47:37.561037 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.562094 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.594812 1390553 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 22:47:37.623754 1390553 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:47:37.623783 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:47:37.623850 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.635491 1390553 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 22:47:37.639499 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 22:47:37.641885 1390553 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 22:47:37.641908 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 22:47:37.641974 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.645672 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 22:47:37.647800 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 22:47:37.649665 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 22:47:37.652962 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 22:47:37.655055 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 22:47:37.657098 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 22:47:37.656224 1390553 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 22:47:37.662434 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 22:47:37.662632 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.664567 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 22:47:37.662406 1390553 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 22:47:37.671678 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 22:47:37.671702 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 22:47:37.671768 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.674908 1390553 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 22:47:37.677036 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 22:47:37.677063 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 22:47:37.677133 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.726652 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.791631 1390553 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 22:47:37.790091 1390553 addons.go:231] Setting addon default-storageclass=true in "addons-534909"
	I0717 22:47:37.790140 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:37.799600 1390553 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 22:47:37.797264 1390553 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 22:47:37.797291 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:37.804535 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:37.821416 1390553 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 22:47:37.823284 1390553 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 22:47:37.825534 1390553 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 22:47:37.829646 1390553 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 22:47:37.829705 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 22:47:37.829787 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.837069 1390553 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 22:47:37.820769 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 22:47:37.842173 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.842344 1390553 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 22:47:37.842352 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 22:47:37.842382 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.872073 1390553 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:47:37.874415 1390553 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:47:37.874434 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:47:37.874496 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:37.896902 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:37.911726 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:37.920655 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:37.938618 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:37.974814 1390553 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:47:37.974836 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:47:37.974897 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:38.050201 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:38.051882 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:38.058731 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:38.081165 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:38.088507 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:38.167770 1390553 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-534909" context rescaled to 1 replicas
	I0717 22:47:38.167806 1390553 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 22:47:38.171283 1390553 out.go:177] * Verifying Kubernetes components...
	I0717 22:47:38.173047 1390553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:47:38.188368 1390553 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:47:38.188389 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 22:47:38.399216 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 22:47:38.506048 1390553 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:47:38.506109 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:47:38.734019 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:47:38.791834 1390553 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.269226055s)
	I0717 22:47:38.792211 1390553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:47:38.793144 1390553 node_ready.go:35] waiting up to 6m0s for node "addons-534909" to be "Ready" ...
	I0717 22:47:38.797395 1390553 node_ready.go:49] node "addons-534909" has status "Ready":"True"
	I0717 22:47:38.797419 1390553 node_ready.go:38] duration metric: took 4.211579ms waiting for node "addons-534909" to be "Ready" ...
	I0717 22:47:38.797428 1390553 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:47:38.807935 1390553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:38.889576 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:47:38.893164 1390553 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:47:38.893189 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:47:38.955546 1390553 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 22:47:38.955573 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 22:47:39.201576 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 22:47:39.201608 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 22:47:39.279691 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 22:47:39.308248 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 22:47:39.320795 1390553 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 22:47:39.320880 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 22:47:39.329447 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:47:39.414020 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 22:47:39.414047 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 22:47:39.505861 1390553 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 22:47:39.505885 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 22:47:39.698426 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 22:47:39.698501 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 22:47:39.813880 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 22:47:39.813951 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 22:47:39.880317 1390553 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 22:47:39.880401 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 22:47:39.963699 1390553 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 22:47:39.963770 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 22:47:39.991659 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 22:47:39.991733 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 22:47:40.218645 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 22:47:40.232278 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 22:47:40.232345 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 22:47:40.353323 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 22:47:40.353351 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 22:47:40.423240 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 22:47:40.423270 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 22:47:40.603892 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 22:47:40.603917 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 22:47:40.816757 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 22:47:40.816840 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 22:47:40.824154 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:40.835559 1390553 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 22:47:40.835583 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 22:47:40.866515 1390553 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 22:47:40.866606 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 22:47:41.060031 1390553 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 22:47:41.060057 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 22:47:41.155286 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 22:47:41.155321 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 22:47:41.201019 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 22:47:41.277905 1390553 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 22:47:41.277940 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 22:47:41.331642 1390553 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 22:47:41.331681 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 22:47:41.408885 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.009568876s)
	I0717 22:47:41.596545 1390553 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 22:47:41.596576 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 22:47:41.657043 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 22:47:41.722102 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.98800208s)
	I0717 22:47:41.730561 1390553 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 22:47:41.730586 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 22:47:41.756290 1390553 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.964031367s)
	I0717 22:47:41.756322 1390553 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 22:47:42.060992 1390553 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 22:47:42.061041 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 22:47:42.257592 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 22:47:42.831217 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:43.310441 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.420820301s)
	I0717 22:47:43.310550 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.030827948s)
	I0717 22:47:44.544976 1390553 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 22:47:44.545071 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:44.586736 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:44.840099 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:45.794813 1390553 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 22:47:45.870618 1390553 addons.go:231] Setting addon gcp-auth=true in "addons-534909"
	I0717 22:47:45.870675 1390553 host.go:66] Checking if "addons-534909" exists ...
	I0717 22:47:45.871121 1390553 cli_runner.go:164] Run: docker container inspect addons-534909 --format={{.State.Status}}
	I0717 22:47:45.914739 1390553 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 22:47:45.914794 1390553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-534909
	I0717 22:47:45.946710 1390553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/addons-534909/id_rsa Username:docker}
	I0717 22:47:46.853550 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:49.279550 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.971259248s)
	I0717 22:47:49.279668 1390553 addons.go:467] Verifying addon ingress=true in "addons-534909"
	I0717 22:47:49.279756 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.950267946s)
	I0717 22:47:49.279772 1390553 addons.go:467] Verifying addon metrics-server=true in "addons-534909"
	I0717 22:47:49.283173 1390553 out.go:177] * Verifying ingress addon...
	I0717 22:47:49.279807 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.061090758s)
	I0717 22:47:49.279952 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.07889653s)
	I0717 22:47:49.280080 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.623002673s)
	I0717 22:47:49.285352 1390553 addons.go:467] Verifying addon registry=true in "addons-534909"
	I0717 22:47:49.288305 1390553 out.go:177] * Verifying registry addon...
	W0717 22:47:49.285801 1390553 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 22:47:49.286564 1390553 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 22:47:49.291151 1390553 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 22:47:49.291317 1390553 retry.go:31] will retry after 293.482126ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 22:47:49.312559 1390553 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 22:47:49.312588 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:49.316957 1390553 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 22:47:49.316977 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:49.362389 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:49.585644 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 22:47:49.832059 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:49.836117 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:50.335387 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:50.369493 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:50.830677 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:50.847788 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:51.338500 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:51.356444 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:51.379772 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:51.903095 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:51.960573 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:52.040679 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.783019758s)
	I0717 22:47:52.040956 1390553 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-534909"
	I0717 22:47:52.040804 1390553 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.126044817s)
	I0717 22:47:52.043948 1390553 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 22:47:52.046554 1390553 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 22:47:52.048796 1390553 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 22:47:52.051903 1390553 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 22:47:52.054645 1390553 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 22:47:52.054677 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 22:47:52.090894 1390553 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 22:47:52.090929 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:52.292986 1390553 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 22:47:52.293013 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 22:47:52.321477 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:52.330228 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:52.584239 1390553 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 22:47:52.584264 1390553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 22:47:52.606288 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:52.697293 1390553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 22:47:52.817113 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:52.824897 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:53.098036 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:53.320975 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:53.327396 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:53.599218 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:53.847178 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:53.849633 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:53.859499 1390553 pod_ready.go:102] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"False"
	I0717 22:47:54.047720 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.462007974s)
	I0717 22:47:54.099536 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:54.351532 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:54.352133 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:54.481286 1390553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.783915136s)
	I0717 22:47:54.482869 1390553 addons.go:467] Verifying addon gcp-auth=true in "addons-534909"
	I0717 22:47:54.485526 1390553 out.go:177] * Verifying gcp-auth addon...
	I0717 22:47:54.488177 1390553 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 22:47:54.502366 1390553 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 22:47:54.502434 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:54.612630 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:54.821343 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:54.827797 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:55.011750 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:55.099532 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:55.318287 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:55.322840 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:55.507202 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:55.597712 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:55.818086 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:55.823102 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:55.824419 1390553 pod_ready.go:92] pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace has status "Ready":"True"
	I0717 22:47:55.824442 1390553 pod_ready.go:81] duration metric: took 17.016420049s waiting for pod "coredns-5d78c9869d-l79hd" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.824486 1390553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-m42dc" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.827032 1390553 pod_ready.go:97] error getting pod "coredns-5d78c9869d-m42dc" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-m42dc" not found
	I0717 22:47:55.827058 1390553 pod_ready.go:81] duration metric: took 2.558346ms waiting for pod "coredns-5d78c9869d-m42dc" in "kube-system" namespace to be "Ready" ...
	E0717 22:47:55.827088 1390553 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-m42dc" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-m42dc" not found
	I0717 22:47:55.827097 1390553 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.833756 1390553 pod_ready.go:92] pod "etcd-addons-534909" in "kube-system" namespace has status "Ready":"True"
	I0717 22:47:55.833777 1390553 pod_ready.go:81] duration metric: took 6.669592ms waiting for pod "etcd-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.833793 1390553 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.841894 1390553 pod_ready.go:92] pod "kube-apiserver-addons-534909" in "kube-system" namespace has status "Ready":"True"
	I0717 22:47:55.841914 1390553 pod_ready.go:81] duration metric: took 8.113972ms waiting for pod "kube-apiserver-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.841926 1390553 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.850730 1390553 pod_ready.go:92] pod "kube-controller-manager-addons-534909" in "kube-system" namespace has status "Ready":"True"
	I0717 22:47:55.850754 1390553 pod_ready.go:81] duration metric: took 8.819771ms waiting for pod "kube-controller-manager-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:55.850767 1390553 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hsstj" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:56.011689 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:56.019628 1390553 pod_ready.go:92] pod "kube-proxy-hsstj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:47:56.019655 1390553 pod_ready.go:81] duration metric: took 168.859142ms waiting for pod "kube-proxy-hsstj" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:56.019668 1390553 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:56.099061 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:56.318234 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:56.323251 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:56.420032 1390553 pod_ready.go:92] pod "kube-scheduler-addons-534909" in "kube-system" namespace has status "Ready":"True"
	I0717 22:47:56.420057 1390553 pod_ready.go:81] duration metric: took 400.34802ms waiting for pod "kube-scheduler-addons-534909" in "kube-system" namespace to be "Ready" ...
	I0717 22:47:56.420066 1390553 pod_ready.go:38] duration metric: took 17.622629149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:47:56.420117 1390553 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:47:56.420204 1390553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:47:56.440935 1390553 api_server.go:72] duration metric: took 18.273098763s to wait for apiserver process to appear ...
	I0717 22:47:56.440961 1390553 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:47:56.441005 1390553 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 22:47:56.451482 1390553 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 22:47:56.452842 1390553 api_server.go:141] control plane version: v1.27.3
	I0717 22:47:56.452936 1390553 api_server.go:131] duration metric: took 11.967034ms to wait for apiserver health ...
	I0717 22:47:56.452959 1390553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:47:56.508749 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:56.596988 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:56.639929 1390553 system_pods.go:59] 17 kube-system pods found
	I0717 22:47:56.640005 1390553 system_pods.go:61] "coredns-5d78c9869d-l79hd" [5d8d430f-76b4-4994-af76-b34104d96664] Running
	I0717 22:47:56.640033 1390553 system_pods.go:61] "csi-hostpath-attacher-0" [687901d1-bfd1-47ea-bbbe-315a7ea41e55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 22:47:56.640060 1390553 system_pods.go:61] "csi-hostpath-resizer-0" [bcd6adb8-7e5f-48ae-97a1-1819c87eefa1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 22:47:56.640084 1390553 system_pods.go:61] "csi-hostpathplugin-hfql7" [1e854197-f366-41a8-b3c4-66e7cea140fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 22:47:56.640106 1390553 system_pods.go:61] "etcd-addons-534909" [8c51d1f9-9653-4d9c-9ded-2cba5f49f1c5] Running
	I0717 22:47:56.640130 1390553 system_pods.go:61] "kindnet-22cwn" [04232e70-7870-4c3e-b59c-91f073afb0ef] Running
	I0717 22:47:56.640149 1390553 system_pods.go:61] "kube-apiserver-addons-534909" [9e3c8759-3eb6-4c49-9032-f37f0b118e8c] Running
	I0717 22:47:56.640179 1390553 system_pods.go:61] "kube-controller-manager-addons-534909" [814d7e64-6eae-4566-9d91-d097b85a5d62] Running
	I0717 22:47:56.640207 1390553 system_pods.go:61] "kube-ingress-dns-minikube" [7e51cbba-550f-48a0-b8d7-355f135fafe7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 22:47:56.640229 1390553 system_pods.go:61] "kube-proxy-hsstj" [36812159-aeae-4112-83f6-7b290698b9b4] Running
	I0717 22:47:56.640251 1390553 system_pods.go:61] "kube-scheduler-addons-534909" [c9523338-8f32-411d-83dd-834febc825e0] Running
	I0717 22:47:56.640280 1390553 system_pods.go:61] "metrics-server-844d8db974-42sjp" [b402c187-9b1c-4e91-b96c-b1cadf466549] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:47:56.640307 1390553 system_pods.go:61] "registry-proxy-624gr" [39384673-9aaa-4d0e-9753-3aa82af62305] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 22:47:56.640330 1390553 system_pods.go:61] "registry-zwjdj" [81ea78ea-50d4-4d00-b0e6-6217c7bc9dba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 22:47:56.640355 1390553 system_pods.go:61] "snapshot-controller-75bbb956b9-4lw7g" [ee67f948-bb93-4ba0-b861-64634e78907a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 22:47:56.640388 1390553 system_pods.go:61] "snapshot-controller-75bbb956b9-lw6dv" [f74779a7-7c69-4062-8655-2b24c056e38c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 22:47:56.640412 1390553 system_pods.go:61] "storage-provisioner" [3d132189-a07e-4573-b1c9-ad11aeb66a8b] Running
	I0717 22:47:56.640433 1390553 system_pods.go:74] duration metric: took 187.457729ms to wait for pod list to return data ...
	I0717 22:47:56.640463 1390553 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:47:56.817108 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:56.819739 1390553 default_sa.go:45] found service account: "default"
	I0717 22:47:56.819764 1390553 default_sa.go:55] duration metric: took 179.275991ms for default service account to be created ...
	I0717 22:47:56.819774 1390553 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:47:56.822861 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:57.007986 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:57.027578 1390553 system_pods.go:86] 17 kube-system pods found
	I0717 22:47:57.027621 1390553 system_pods.go:89] "coredns-5d78c9869d-l79hd" [5d8d430f-76b4-4994-af76-b34104d96664] Running
	I0717 22:47:57.027634 1390553 system_pods.go:89] "csi-hostpath-attacher-0" [687901d1-bfd1-47ea-bbbe-315a7ea41e55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 22:47:57.027645 1390553 system_pods.go:89] "csi-hostpath-resizer-0" [bcd6adb8-7e5f-48ae-97a1-1819c87eefa1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 22:47:57.027653 1390553 system_pods.go:89] "csi-hostpathplugin-hfql7" [1e854197-f366-41a8-b3c4-66e7cea140fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 22:47:57.027664 1390553 system_pods.go:89] "etcd-addons-534909" [8c51d1f9-9653-4d9c-9ded-2cba5f49f1c5] Running
	I0717 22:47:57.027671 1390553 system_pods.go:89] "kindnet-22cwn" [04232e70-7870-4c3e-b59c-91f073afb0ef] Running
	I0717 22:47:57.027682 1390553 system_pods.go:89] "kube-apiserver-addons-534909" [9e3c8759-3eb6-4c49-9032-f37f0b118e8c] Running
	I0717 22:47:57.027688 1390553 system_pods.go:89] "kube-controller-manager-addons-534909" [814d7e64-6eae-4566-9d91-d097b85a5d62] Running
	I0717 22:47:57.027697 1390553 system_pods.go:89] "kube-ingress-dns-minikube" [7e51cbba-550f-48a0-b8d7-355f135fafe7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 22:47:57.027705 1390553 system_pods.go:89] "kube-proxy-hsstj" [36812159-aeae-4112-83f6-7b290698b9b4] Running
	I0717 22:47:57.027711 1390553 system_pods.go:89] "kube-scheduler-addons-534909" [c9523338-8f32-411d-83dd-834febc825e0] Running
	I0717 22:47:57.027725 1390553 system_pods.go:89] "metrics-server-844d8db974-42sjp" [b402c187-9b1c-4e91-b96c-b1cadf466549] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:47:57.027734 1390553 system_pods.go:89] "registry-proxy-624gr" [39384673-9aaa-4d0e-9753-3aa82af62305] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 22:47:57.027745 1390553 system_pods.go:89] "registry-zwjdj" [81ea78ea-50d4-4d00-b0e6-6217c7bc9dba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 22:47:57.027757 1390553 system_pods.go:89] "snapshot-controller-75bbb956b9-4lw7g" [ee67f948-bb93-4ba0-b861-64634e78907a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 22:47:57.027766 1390553 system_pods.go:89] "snapshot-controller-75bbb956b9-lw6dv" [f74779a7-7c69-4062-8655-2b24c056e38c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 22:47:57.027772 1390553 system_pods.go:89] "storage-provisioner" [3d132189-a07e-4573-b1c9-ad11aeb66a8b] Running
	I0717 22:47:57.027785 1390553 system_pods.go:126] duration metric: took 208.005756ms to wait for k8s-apps to be running ...
	I0717 22:47:57.027793 1390553 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:47:57.027858 1390553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:47:57.053241 1390553 system_svc.go:56] duration metric: took 25.437328ms WaitForService to wait for kubelet.
	I0717 22:47:57.053267 1390553 kubeadm.go:581] duration metric: took 18.885437187s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:47:57.053291 1390553 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:47:57.099127 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:57.219773 1390553 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 22:47:57.219806 1390553 node_conditions.go:123] node cpu capacity is 2
	I0717 22:47:57.219819 1390553 node_conditions.go:105] duration metric: took 166.523197ms to run NodePressure ...
	I0717 22:47:57.219830 1390553 start.go:228] waiting for startup goroutines ...
	I0717 22:47:57.317510 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:57.326722 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:57.507293 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:57.597384 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:57.817283 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:57.821611 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:58.007632 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:58.096599 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:58.317975 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:58.322860 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:58.506956 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:58.597583 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:58.817261 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:58.821809 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:59.006690 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:59.097159 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:59.317560 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:59.322323 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:47:59.506675 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:47:59.597278 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:47:59.818295 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:47:59.823586 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:00.017278 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:00.115193 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:00.348396 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:00.355281 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:00.506598 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:00.599242 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:00.818194 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:00.823261 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:01.007414 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:01.097087 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:01.317761 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:01.322978 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:01.506420 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:01.597476 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:01.817451 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:01.821979 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:02.007753 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:02.097755 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:02.318305 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:02.323144 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:02.507119 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:02.596701 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:02.817797 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:02.821941 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:03.010737 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:03.097302 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:03.319761 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:03.323291 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:03.507696 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:03.598537 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:03.822560 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:03.835475 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:04.011757 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:04.097413 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:04.318011 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:04.322303 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:04.506422 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:04.597048 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:04.823602 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:04.827264 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:05.015201 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:05.100148 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:05.318821 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:05.327078 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:05.507762 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:05.597957 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:05.818737 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:05.831288 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:06.007797 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:06.104759 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:06.319189 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:06.333204 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:06.517165 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:06.612148 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:06.818541 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:06.832057 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:07.021585 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:07.098351 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:07.324159 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:07.331118 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:07.508545 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:07.598012 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:07.817863 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:07.823059 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:08.010966 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:08.097412 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:08.318476 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:08.322089 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:08.506718 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:08.597023 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:08.817273 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:08.821865 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:09.010481 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:09.097364 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:09.317486 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:09.322160 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:09.506977 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:09.597749 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:09.817203 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:09.822430 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:10.015740 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:10.097948 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:10.317683 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:10.321946 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:10.506061 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:10.597846 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:10.817568 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:10.822164 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:11.007763 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:11.098307 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:11.319899 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:11.322654 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:11.506784 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:11.597852 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:11.817354 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:11.822203 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:12.008595 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:12.097298 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:12.317497 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:12.321800 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:12.506915 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:12.597348 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:12.818273 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:12.823196 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:13.008531 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:13.097705 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:13.331343 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:13.331584 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:13.506970 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:13.597591 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:13.817605 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:13.822340 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:14.014011 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:14.097088 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:14.337461 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:14.357679 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:14.506861 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:14.596635 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:14.817420 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:14.822390 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:15.010055 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:15.102322 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:15.317604 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:15.322581 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:15.507402 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:15.597240 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:15.817721 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:15.822524 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:16.013477 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:16.097995 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:16.317642 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:16.321978 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:16.506508 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:16.597899 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:16.827634 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:16.830346 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:17.008766 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:17.097176 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:17.318101 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:17.322870 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:17.507443 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:17.598245 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:17.817984 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:17.822864 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:18.007845 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:18.097289 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:18.317990 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:18.323871 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:18.507635 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:18.597823 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:18.818307 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:18.823689 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:19.007541 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:19.097449 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:19.316965 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:19.320991 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:19.506059 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:19.597111 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:19.817265 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:19.821922 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:20.019316 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:20.105303 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:20.318252 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:20.324471 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:20.506777 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:20.597112 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:20.817785 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:20.822526 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:21.008268 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:21.098583 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:21.317263 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:21.322384 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:21.506769 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:21.600677 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:21.817941 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:21.823589 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:22.007920 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:22.098966 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:22.318155 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:22.322872 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:22.507308 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:22.598912 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:22.818663 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:22.821659 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:23.007172 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:23.097802 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:23.317660 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:23.322599 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:23.506639 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:23.597183 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:23.818439 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:23.823152 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:24.015412 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:24.104159 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:24.317797 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:24.322902 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 22:48:24.506249 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:24.597581 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:24.817378 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:24.821915 1390553 kapi.go:107] duration metric: took 35.530762493s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 22:48:25.008355 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:25.098017 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:25.318591 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:25.506506 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:25.602747 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:25.825083 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:26.014783 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:26.097783 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:26.317292 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:26.505967 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:26.598447 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:26.817183 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:27.007961 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:27.096573 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:27.317510 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:27.514910 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:27.597793 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:27.818766 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:28.013760 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:28.104348 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:28.318113 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:28.512106 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:28.597383 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:28.817371 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:29.007979 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:29.096547 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:29.317291 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:29.508298 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:29.596228 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:29.818401 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:30.016888 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:30.101073 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:30.318471 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:30.506251 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:30.597911 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:30.817739 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:31.014042 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:31.098555 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:31.318511 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:31.506522 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:31.597161 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:31.817846 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:32.007830 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:32.100534 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:32.317141 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:32.507010 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:32.597915 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:32.818402 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:33.025533 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:33.097802 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:33.317571 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:33.506963 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:33.597290 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:33.818549 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:34.013309 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:34.097574 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:34.317719 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:34.513034 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:34.596970 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:34.818120 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:35.010284 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:35.097941 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:35.319614 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:35.507428 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:35.597545 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:35.817982 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:36.007733 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:36.098061 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:36.317728 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:36.506426 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:36.599041 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:36.818006 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:37.008045 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:37.097247 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:37.319087 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:37.512353 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:37.597727 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:37.816833 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:38.013029 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:38.097815 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:38.318047 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:38.507066 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:38.599192 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:38.818830 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:39.007553 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:39.097406 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:39.318700 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:39.506857 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:39.597252 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:48:39.818237 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:40.047824 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:40.108510 1390553 kapi.go:107] duration metric: took 48.056594416s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 22:48:40.317541 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:40.506966 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:40.817228 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:41.007424 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:41.317136 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:41.507107 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:41.818202 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:42.009357 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:42.317960 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:42.506632 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:42.823068 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:43.014715 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:43.317476 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:43.506368 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:43.817721 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:44.008841 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:44.318193 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:44.506113 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:44.817639 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:45.021561 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:45.320538 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:45.507649 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:45.818037 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:46.008392 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:46.317248 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:46.506224 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:46.817750 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:47.006846 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:47.317733 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:47.506777 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:47.817680 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:48.008675 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:48.317272 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:48.507003 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:48.818180 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:49.007852 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:49.318306 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:49.506329 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:49.817119 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:50.014469 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:50.317100 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:50.506756 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:50.817314 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:51.014455 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:51.317250 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:51.506140 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:51.817318 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:52.006737 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:52.317568 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:52.506739 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:52.817839 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:53.009153 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:53.317140 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:53.507674 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:53.817425 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:54.011119 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:54.317767 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:54.508543 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:54.818252 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:55.008800 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:55.320833 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:55.506708 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:55.817974 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:56.007329 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:56.317763 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:56.506586 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:56.817245 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:57.007275 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:57.317623 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:57.506972 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:57.817967 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:58.007885 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:58.317371 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:58.507080 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:58.818979 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:59.007629 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:59.317747 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:48:59.507360 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:48:59.817508 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:49:00.052661 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:00.319870 1390553 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:49:00.507363 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:00.817679 1390553 kapi.go:107] duration metric: took 1m11.53111019s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 22:49:01.008973 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:01.506192 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:02.013237 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:02.506434 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:03.012332 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:03.506408 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:04.013023 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:04.506990 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:05.016591 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:05.505904 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:06.007250 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:06.506070 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:07.007164 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:07.506314 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:08.007617 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:08.506220 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:09.006723 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:09.506707 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:10.012191 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:10.505879 1390553 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:49:11.008250 1390553 kapi.go:107] duration metric: took 1m16.520070842s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 22:49:11.010999 1390553 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-534909 cluster.
	I0717 22:49:11.013234 1390553 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 22:49:11.014961 1390553 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 22:49:11.017132 1390553 out.go:177] * Enabled addons: ingress-dns, default-storageclass, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 22:49:11.019046 1390553 addons.go:502] enable addons completed in 1m33.496036629s: enabled=[ingress-dns default-storageclass storage-provisioner cloud-spanner metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 22:49:11.019097 1390553 start.go:233] waiting for cluster config update ...
	I0717 22:49:11.019131 1390553 start.go:242] writing updated cluster config ...
	I0717 22:49:11.019485 1390553 ssh_runner.go:195] Run: rm -f paused
	I0717 22:49:11.085864 1390553 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:49:11.088108 1390553 out.go:177] * Done! kubectl is now configured to use "addons-534909" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Jul 17 22:50:05 addons-534909 cri-dockerd[1305]: time="2023-07-17T22:50:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af19d069626c111c9687922b1b4722fcc2f485eedeae53fe83e28371612f381e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Jul 17 22:50:05 addons-534909 cri-dockerd[1305]: time="2023-07-17T22:50:05Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jul 17 22:50:12 addons-534909 dockerd[1095]: time="2023-07-17T22:50:12.749404262Z" level=info msg="ignoring event" container=35aa88664218b9289e65dfde55af67769152d6ff00814b55b80df2bb749bd00b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:12 addons-534909 dockerd[1095]: time="2023-07-17T22:50:12.840765547Z" level=info msg="ignoring event" container=af19d069626c111c9687922b1b4722fcc2f485eedeae53fe83e28371612f381e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:13 addons-534909 dockerd[1095]: time="2023-07-17T22:50:13.814210587Z" level=info msg="ignoring event" container=dc48189ee5fe1c2c5a6cb7d9d040bac58deca4f44df8837d439bece4c1d3258b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.657404417Z" level=info msg="ignoring event" container=bb247f23a4ae8f154f617e92417d75f88951ee163d1fe7f28ee0799765b05315 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.685217642Z" level=info msg="ignoring event" container=6c036f90e09ca655fb9339cb1823f9a849bcec73164181f73792a8d4197b3745 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.830405106Z" level=info msg="ignoring event" container=999dc841d86239ae6cb459706601b06d4df284ad4f668661f3122b2c74a9b3eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.830457266Z" level=info msg="ignoring event" container=9c47b4d56af285431aaaaa9a1c06715de67a080c580459a4dfa451acf2bc6185 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.899049926Z" level=info msg="ignoring event" container=2a3c20a23c9b2688ddb90a36c6abd31c9d68568ab52f7b48b90e0b306a964389 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.901776658Z" level=info msg="ignoring event" container=489b33f6925a12f0aa431f44dcd1586bbf16b1f4ad77e4059f3da2774b32b58c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:14 addons-534909 dockerd[1095]: time="2023-07-17T22:50:14.931308266Z" level=info msg="ignoring event" container=96c469f2a4a726af2129c7eb012960346273966bfdd1df9736cfd0695cf9bae5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:15 addons-534909 dockerd[1095]: time="2023-07-17T22:50:15.032988323Z" level=info msg="ignoring event" container=7a3aa3b6d0baadbe26e2485cbd2aa386902237c535846a1ca5863c2db382b73f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:15 addons-534909 dockerd[1095]: time="2023-07-17T22:50:15.242193428Z" level=info msg="ignoring event" container=2f99c39d6a593058d06131e545ea5e975510d15c2abe780eadcb54a27a277bfa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:15 addons-534909 dockerd[1095]: time="2023-07-17T22:50:15.462807773Z" level=info msg="ignoring event" container=b37173201a1c10b5f09d899c5f6e3bf55cb0bf57fb17a1ab499ab8800ebacbc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:15 addons-534909 dockerd[1095]: time="2023-07-17T22:50:15.499393237Z" level=info msg="ignoring event" container=11d6bfcb605509bd0fe1a7b6797b32bea30451d54660c4dd15f5a76958c5ba44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:16 addons-534909 dockerd[1095]: time="2023-07-17T22:50:16.313429796Z" level=info msg="Container failed to exit within 1s of signal 15 - using the force" container=094ba3d0c4b1df1a53f25ee370265bb071dedd24f32b8a673988adc5c87707be
	Jul 17 22:50:16 addons-534909 dockerd[1095]: time="2023-07-17T22:50:16.405142048Z" level=info msg="ignoring event" container=094ba3d0c4b1df1a53f25ee370265bb071dedd24f32b8a673988adc5c87707be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:16 addons-534909 cri-dockerd[1305]: time="2023-07-17T22:50:16Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"ingress-nginx-controller-7799c6795f-zljmp_ingress-nginx\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 22:50:16 addons-534909 dockerd[1095]: time="2023-07-17T22:50:16.565424429Z" level=info msg="ignoring event" container=c6c9dd1181c6a1f791ba317b2867733b5b41e2365d851b40db12dec54da19f99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:17 addons-534909 dockerd[1095]: time="2023-07-17T22:50:17.147746567Z" level=info msg="ignoring event" container=3c21742dfc3c386c3b2fd3a5d1e914ee43a70dada2dc65c81aa8f25b71bcf979 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:21 addons-534909 dockerd[1095]: time="2023-07-17T22:50:21.294906997Z" level=info msg="ignoring event" container=77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:21 addons-534909 dockerd[1095]: time="2023-07-17T22:50:21.297331614Z" level=info msg="ignoring event" container=d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:21 addons-534909 dockerd[1095]: time="2023-07-17T22:50:21.416497927Z" level=info msg="ignoring event" container=d39083ecf02858cee0da1f9f5c90086e99bcc0fa5c3e50de615cc1c767d72796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:50:21 addons-534909 dockerd[1095]: time="2023-07-17T22:50:21.449958003Z" level=info msg="ignoring event" container=8179c2e1c46e6a5d58a5cb5ae11b921819abeffb4f0cebb80204805810c5415c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3c21742dfc3c3       13753a81eccfd                                                                                                                6 seconds ago        Exited              hello-world-app           2                   d111ef488d797       hello-world-app-65bdb79f98-tjnvt
	44174f2879ca9       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                                                33 seconds ago       Running             nginx                     0                   7fa9ec2c3f26d       nginx
	baef623c88786       gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b                          59 seconds ago       Exited              registry-test             0                   9c40a89ed7374       registry-test
	7366e46db6160       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        About a minute ago   Running             headlamp                  0                   b515c5037c3de       headlamp-66f6498c69-dsvzb
	f4eb424a471d3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                  0                   f08e0dada0e44       gcp-auth-58478865f7-j9jpq
	d66614028c7e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   About a minute ago   Exited              patch                     0                   e43ebd77d86d4       ingress-nginx-admission-patch-658td
	ffda514cc40b2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   About a minute ago   Exited              create                    0                   cad00d9fe3660       ingress-nginx-admission-create-25bvv
	b288f844d08a0       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   1                   e62fbbb223501       coredns-5d78c9869d-l79hd
	5bebc8f6b6ef0       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   8bd5c3a11bbb6       storage-provisioner
	0b2ed6d4a2255       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                                     2 minutes ago        Running             kindnet-cni               0                   56f3ee2350c18       kindnet-22cwn
	362ee0bd1d795       97e04611ad434                                                                                                                2 minutes ago        Exited              coredns                   0                   c18b5f832bf6b       coredns-5d78c9869d-l79hd
	e5c4d21a4b45c       fb73e92641fd5                                                                                                                2 minutes ago        Running             kube-proxy                0                   08fef7afcaca9       kube-proxy-hsstj
	08e979923dc75       39dfb036b0986                                                                                                                3 minutes ago        Running             kube-apiserver            0                   be11a8189a5bc       kube-apiserver-addons-534909
	581527d4d75e5       ab3683b584ae5                                                                                                                3 minutes ago        Running             kube-controller-manager   0                   f6db002fe4aab       kube-controller-manager-addons-534909
	40f88f246a93b       bcb9e554eaab6                                                                                                                3 minutes ago        Running             kube-scheduler            0                   2362b6155dd8f       kube-scheduler-addons-534909
	be26d0fff146e       24bc64e911039                                                                                                                3 minutes ago        Running             etcd                      0                   e4ba7528b860a       etcd-addons-534909
	
	* 
	* ==> coredns [362ee0bd1d79] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 5195915916440041626.6049744869098995592. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 5195915916440041626.6049744869098995592. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [b288f844d08a] <==
	* [INFO] 10.244.0.16:48100 - 22 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072526s
	[INFO] 10.244.0.16:48100 - 63385 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079057s
	[INFO] 10.244.0.16:48100 - 59339 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069973s
	[INFO] 10.244.0.16:48100 - 63755 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000114027s
	[INFO] 10.244.0.16:48100 - 25735 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00241894s
	[INFO] 10.244.0.16:48100 - 43695 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006750364s
	[INFO] 10.244.0.16:48100 - 48706 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000091922s
	[INFO] 10.244.0.16:37005 - 34788 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000133432s
	[INFO] 10.244.0.16:55795 - 57761 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000113026s
	[INFO] 10.244.0.16:55795 - 56609 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075733s
	[INFO] 10.244.0.16:37005 - 28860 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000152206s
	[INFO] 10.244.0.16:37005 - 14386 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068242s
	[INFO] 10.244.0.16:55795 - 42307 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089403s
	[INFO] 10.244.0.16:37005 - 50037 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005111s
	[INFO] 10.244.0.16:55795 - 41277 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036677s
	[INFO] 10.244.0.16:37005 - 32474 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031959s
	[INFO] 10.244.0.16:55795 - 2732 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028176s
	[INFO] 10.244.0.16:37005 - 38074 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039549s
	[INFO] 10.244.0.16:55795 - 5788 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035462s
	[INFO] 10.244.0.16:37005 - 13878 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001461521s
	[INFO] 10.244.0.16:55795 - 5561 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001298098s
	[INFO] 10.244.0.16:55795 - 13390 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001629628s
	[INFO] 10.244.0.16:37005 - 14361 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001557242s
	[INFO] 10.244.0.16:55795 - 61637 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000099881s
	[INFO] 10.244.0.16:37005 - 4134 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135164s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-534909
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-534909
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=addons-534909
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_47_26_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-534909
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:47:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-534909
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:50:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:49:59 +0000   Mon, 17 Jul 2023 22:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:49:59 +0000   Mon, 17 Jul 2023 22:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:49:59 +0000   Mon, 17 Jul 2023 22:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:49:59 +0000   Mon, 17 Jul 2023 22:47:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-534909
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 c53b484ace9f44d096894213a56f30c0
	  System UUID:                87c1b092-3b71-42a3-ba85-faba49481b06
	  Boot ID:                    cbdc664b-32f3-4468-95d3-fdbd4fe2a3f0
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-tjnvt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-58478865f7-j9jpq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  headlamp                    headlamp-66f6498c69-dsvzb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 coredns-5d78c9869d-l79hd                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m46s
	  kube-system                 etcd-addons-534909                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m58s
	  kube-system                 kindnet-22cwn                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m46s
	  kube-system                 kube-apiserver-addons-534909             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-controller-manager-addons-534909    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-proxy-hsstj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-scheduler-addons-534909             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m44s  kube-proxy       
	  Normal  Starting                 2m59s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s  kubelet          Node addons-534909 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s  kubelet          Node addons-534909 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s  kubelet          Node addons-534909 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m58s  kubelet          Node addons-534909 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m58s  kubelet          Node addons-534909 status is now: NodeReady
	  Normal  RegisteredNode           2m47s  node-controller  Node addons-534909 event: Registered Node addons-534909 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001076] FS-Cache: O-key=[8] 'f373ed0000000000'
	[  +0.000700] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000fd6a4fa5
	[  +0.001027] FS-Cache: N-key=[8] 'f373ed0000000000'
	[  +0.002623] FS-Cache: Duplicate cookie detected
	[  +0.000802] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=000000009ea66d98
	[  +0.001117] FS-Cache: O-key=[8] 'f373ed0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001014] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=0000000099715709
	[  +0.001103] FS-Cache: N-key=[8] 'f373ed0000000000'
	[Jul17 21:37] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000c5602f0f
	[  +0.001080] FS-Cache: O-key=[8] 'f273ed0000000000'
	[  +0.000699] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000fd6a4fa5
	[  +0.001030] FS-Cache: N-key=[8] 'f273ed0000000000'
	[  +0.436562] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000930] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=0000000090c83777
	[  +0.001017] FS-Cache: O-key=[8] 'f873ed0000000000'
	[  +0.000708] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000909] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000a89fbb87
	[  +0.001040] FS-Cache: N-key=[8] 'f873ed0000000000'
	
	* 
	* ==> etcd [be26d0fff146] <==
	* {"level":"info","ts":"2023-07-17T22:47:17.134Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T22:47:17.137Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-07-17T22:47:17.138Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:47:17.138Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:47:17.138Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:47:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-17T22:47:17.138Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-17T22:47:18.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T22:47:18.121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T22:47:18.121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-17T22:47:18.121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:47:18.121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T22:47:18.121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-17T22:47:18.121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T22:47:18.125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-534909 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:47:18.125Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:47:18.128Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:47:18.130Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-07-17T22:47:18.131Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:47:18.170Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:47:18.131Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:47:18.131Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:47:18.177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:47:18.228Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:47:18.229Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [f4eb424a471d] <==
	* 2023/07/17 22:49:10 GCP Auth Webhook started!
	2023/07/17 22:49:18 Ready to marshal response ...
	2023/07/17 22:49:18 Ready to write response ...
	2023/07/17 22:49:18 Ready to marshal response ...
	2023/07/17 22:49:18 Ready to write response ...
	2023/07/17 22:49:18 Ready to marshal response ...
	2023/07/17 22:49:18 Ready to write response ...
	2023/07/17 22:49:21 Ready to marshal response ...
	2023/07/17 22:49:21 Ready to write response ...
	2023/07/17 22:49:31 Ready to marshal response ...
	2023/07/17 22:49:31 Ready to write response ...
	2023/07/17 22:49:47 Ready to marshal response ...
	2023/07/17 22:49:47 Ready to write response ...
	2023/07/17 22:49:57 Ready to marshal response ...
	2023/07/17 22:49:57 Ready to write response ...
	2023/07/17 22:50:04 Ready to marshal response ...
	2023/07/17 22:50:04 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:50:23 up  6:32,  0 users,  load average: 3.51, 3.12, 2.45
	Linux addons-534909 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [0b2ed6d4a225] <==
	* I0717 22:48:21.405131       1 main.go:227] handling current node
	I0717 22:48:31.408927       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:48:31.408954       1 main.go:227] handling current node
	I0717 22:48:41.421120       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:48:41.421150       1 main.go:227] handling current node
	I0717 22:48:51.433853       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:48:51.433884       1 main.go:227] handling current node
	I0717 22:49:01.441740       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:49:01.441768       1 main.go:227] handling current node
	I0717 22:49:11.445882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:49:11.445911       1 main.go:227] handling current node
	I0717 22:49:21.467408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:49:21.467456       1 main.go:227] handling current node
	I0717 22:49:31.494430       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:49:31.494460       1 main.go:227] handling current node
	I0717 22:49:41.544521       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:49:41.544550       1 main.go:227] handling current node
	I0717 22:49:51.549606       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:49:51.549635       1 main.go:227] handling current node
	I0717 22:50:01.554876       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:50:01.554907       1 main.go:227] handling current node
	I0717 22:50:11.567862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:50:11.567892       1 main.go:227] handling current node
	I0717 22:50:21.580682       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:50:21.580711       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [08e979923dc7] <==
	* E0717 22:50:14.296480       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 22:50:14.296505       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 22:50:14.296535       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 22:50:14.296556       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 22:50:14.413333       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0717 22:50:20.945195       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:20.945253       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:20.971589       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:20.971851       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:20.989603       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:20.989866       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:21.026280       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:21.026385       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:21.086807       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:21.088931       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:21.089124       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:21.089367       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:21.141534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:21.141674       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:50:21.146138       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:50:21.146193       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 22:50:22.038427       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 22:50:22.142022       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 22:50:22.172725       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [581527d4d75e] <==
	* I0717 22:49:51.559558       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0717 22:49:57.627305       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 22:49:57.691583       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-tjnvt"
	W0717 22:50:00.159035       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:00.159074       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 22:50:04.166004       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0717 22:50:06.643893       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0717 22:50:06.643942       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:50:07.106627       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0717 22:50:07.106676       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:50:14.385178       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0717 22:50:14.534668       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0717 22:50:14.960610       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:14.960646       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 22:50:15.175297       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 22:50:15.214549       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	E0717 22:50:22.040647       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:22.144284       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:22.175023       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:50:23.014402       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:23.014451       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:50:23.071092       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:23.071152       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:50:23.728004       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:50:23.728043       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [e5c4d21a4b45] <==
	* I0717 22:47:39.074609       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 22:47:39.074750       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 22:47:39.074808       1 server_others.go:554] "Using iptables proxy"
	I0717 22:47:39.212317       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:47:39.212357       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 22:47:39.212366       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 22:47:39.212382       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 22:47:39.212446       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:47:39.213591       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:47:39.213617       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:47:39.214439       1 config.go:188] "Starting service config controller"
	I0717 22:47:39.214487       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:47:39.214524       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:47:39.214533       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:47:39.219702       1 config.go:315] "Starting node config controller"
	I0717 22:47:39.219728       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:47:39.318151       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:47:39.318205       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:47:39.322496       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [40f88f246a93] <==
	* W0717 22:47:21.651590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:47:21.651613       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 22:47:21.651694       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:47:21.651724       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:47:21.651782       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:47:21.651802       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 22:47:21.654360       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:47:21.654399       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 22:47:21.654474       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:47:21.654499       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:47:21.654568       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:47:21.654584       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:47:21.654637       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:47:21.654646       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:47:22.485628       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:47:22.485823       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 22:47:22.554591       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:47:22.554746       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 22:47:22.574677       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:47:22.574900       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:47:22.626449       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:47:22.626490       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:47:22.687591       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:47:22.687627       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0717 22:47:24.728026       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.014186    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=1e854197-f366-41a8-b3c4-66e7cea140fb path="/var/lib/kubelet/pods/1e854197-f366-41a8-b3c4-66e7cea140fb/volumes"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.014846    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4397019c-7816-482d-930d-0094b161c178 path="/var/lib/kubelet/pods/4397019c-7816-482d-930d-0094b161c178/volumes"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.015210    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=687901d1-bfd1-47ea-bbbe-315a7ea41e55 path="/var/lib/kubelet/pods/687901d1-bfd1-47ea-bbbe-315a7ea41e55/volumes"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.018448    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=76295516-5455-4af3-a777-15ff62744f05 path="/var/lib/kubelet/pods/76295516-5455-4af3-a777-15ff62744f05/volumes"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.018934    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=bcd6adb8-7e5f-48ae-97a1-1819c87eefa1 path="/var/lib/kubelet/pods/bcd6adb8-7e5f-48ae-97a1-1819c87eefa1/volumes"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.647003    2343 scope.go:115] "RemoveContainer" containerID="094ba3d0c4b1df1a53f25ee370265bb071dedd24f32b8a673988adc5c87707be"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.676256    2343 scope.go:115] "RemoveContainer" containerID="5aedf282a56910e21feca5d89e22f52c2854e504d47be386bd611263797f83a0"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: I0717 22:50:17.676737    2343 scope.go:115] "RemoveContainer" containerID="3c21742dfc3c386c3b2fd3a5d1e914ee43a70dada2dc65c81aa8f25b71bcf979"
	Jul 17 22:50:17 addons-534909 kubelet[2343]: E0717 22:50:17.677191    2343 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-tjnvt_default(23a16891-11fd-4597-ac55-412d965644ae)\"" pod="default/hello-world-app-65bdb79f98-tjnvt" podUID=23a16891-11fd-4597-ac55-412d965644ae
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.566399    2343 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75zhf\" (UniqueName: \"kubernetes.io/projected/f74779a7-7c69-4062-8655-2b24c056e38c-kube-api-access-75zhf\") pod \"f74779a7-7c69-4062-8655-2b24c056e38c\" (UID: \"f74779a7-7c69-4062-8655-2b24c056e38c\") "
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.566452    2343 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lxzh\" (UniqueName: \"kubernetes.io/projected/ee67f948-bb93-4ba0-b861-64634e78907a-kube-api-access-8lxzh\") pod \"ee67f948-bb93-4ba0-b861-64634e78907a\" (UID: \"ee67f948-bb93-4ba0-b861-64634e78907a\") "
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.568997    2343 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee67f948-bb93-4ba0-b861-64634e78907a-kube-api-access-8lxzh" (OuterVolumeSpecName: "kube-api-access-8lxzh") pod "ee67f948-bb93-4ba0-b861-64634e78907a" (UID: "ee67f948-bb93-4ba0-b861-64634e78907a"). InnerVolumeSpecName "kube-api-access-8lxzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.569823    2343 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74779a7-7c69-4062-8655-2b24c056e38c-kube-api-access-75zhf" (OuterVolumeSpecName: "kube-api-access-75zhf") pod "f74779a7-7c69-4062-8655-2b24c056e38c" (UID: "f74779a7-7c69-4062-8655-2b24c056e38c"). InnerVolumeSpecName "kube-api-access-75zhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.667371    2343 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-75zhf\" (UniqueName: \"kubernetes.io/projected/f74779a7-7c69-4062-8655-2b24c056e38c-kube-api-access-75zhf\") on node \"addons-534909\" DevicePath \"\""
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.667415    2343 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8lxzh\" (UniqueName: \"kubernetes.io/projected/ee67f948-bb93-4ba0-b861-64634e78907a-kube-api-access-8lxzh\") on node \"addons-534909\" DevicePath \"\""
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.768072    2343 scope.go:115] "RemoveContainer" containerID="77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.803802    2343 scope.go:115] "RemoveContainer" containerID="77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: E0717 22:50:21.804695    2343 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca" containerID="77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.804756    2343 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca} err="failed to get container status \"77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca\": rpc error: code = Unknown desc = Error response from daemon: No such container: 77432dcd61dfe1f52e531c69bee20ae1dd94fc022ec066ed2271724ec4218fca"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.804770    2343 scope.go:115] "RemoveContainer" containerID="d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.832097    2343 scope.go:115] "RemoveContainer" containerID="d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: E0717 22:50:21.833012    2343 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8" containerID="d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8"
	Jul 17 22:50:21 addons-534909 kubelet[2343]: I0717 22:50:21.833064    2343 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8} err="failed to get container status \"d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8\": rpc error: code = Unknown desc = Error response from daemon: No such container: d441cc525bd68aeda74ba42098eb3d8db3b0e1d9ece12cdfbd54e6301b1920a8"
	Jul 17 22:50:23 addons-534909 kubelet[2343]: I0717 22:50:23.019765    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ee67f948-bb93-4ba0-b861-64634e78907a path="/var/lib/kubelet/pods/ee67f948-bb93-4ba0-b861-64634e78907a/volumes"
	Jul 17 22:50:23 addons-534909 kubelet[2343]: I0717 22:50:23.020164    2343 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f74779a7-7c69-4062-8655-2b24c056e38c path="/var/lib/kubelet/pods/f74779a7-7c69-4062-8655-2b24c056e38c/volumes"
	
	* 
	* ==> storage-provisioner [5bebc8f6b6ef] <==
	* I0717 22:47:46.438683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:47:47.791201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:47:47.791296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:47:48.586045       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:47:48.715949       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-534909_c06119f0-0dc5-44a2-b415-01d12045d361!
	I0717 22:47:49.248997       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92264791-d38e-452e-98ee-31b4d1dcbd35", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-534909_c06119f0-0dc5-44a2-b415-01d12045d361 became leader
	I0717 22:47:49.955440       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-534909_c06119f0-0dc5-44a2-b415-01d12045d361!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-534909 -n addons-534909
helpers_test.go:261: (dbg) Run:  kubectl --context addons-534909 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.88s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (240.238745ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
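The INET_LICENSES failure above is a plain HTTP check: the license download returned 404 where the command requires 200. As a sketch of that status check outside the test harness (the `LICENSE_URL` value and the `check_download` helper are placeholders for illustration; the real endpoint used by `minikube license` is not shown in this report):

```shell
#!/bin/sh
# Sketch only: LICENSE_URL is a placeholder, not the actual endpoint.
# A status code could be obtained with:
#   curl -s -o /dev/null -w '%{http_code}' "$LICENSE_URL"
LICENSE_URL="${LICENSE_URL:-https://example.com/licenses.tar.gz}"

check_download() {
  status="$1"   # HTTP status code as a string
  if [ "$status" = "200" ]; then
    echo "ok"
  else
    # Mirrors the wording of the INET_LICENSES error above.
    echo "download request did not return a 200, received: $status"
  fi
}

check_download 404
```

With `404` this prints the same complaint the test surfaced, which suggests the failure is on the remote side (moved or missing artifact) rather than in the CLI itself.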
--- FAIL: TestFunctional/parallel/License (0.24s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (59.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-539717 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-539717 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.29298816s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-539717 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-539717 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2a63d1f5-1594-41f8-b44c-675863d68925] Pending
helpers_test.go:344: "nginx" [2a63d1f5-1594-41f8-b44c-675863d68925] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2a63d1f5-1594-41f8-b44c-675863d68925] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.025948277s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-539717 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.008209183s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
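The nslookup output above ("connection timed out; no servers could be reached") means the ingress-dns server at 192.168.49.2 never answered at all, which is a different failure mode from an NXDOMAIN (where the server answers but has no record). A minimal sketch of telling the two apart from nslookup's output (`classify_dns` is a hypothetical helper, not part of the minikube test suite):

```shell
# Sketch: classify nslookup output by its failure mode.
classify_dns() {
  case "$1" in
    *"connection timed out"*) echo "server-unreachable" ;;  # DNS server never responded
    *"NXDOMAIN"*)             echo "name-not-found" ;;      # server answered: no such record
    *"Address:"*)             echo "resolved" ;;            # got an answer
    *)                        echo "unknown" ;;
  esac
}

classify_dns ";; connection timed out; no servers could be reached"
```

Here the classification is "server-unreachable", pointing at the ingress-dns pod or its service endpoint rather than at the hello-john.test record itself.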
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons disable ingress-dns --alsologtostderr -v=1: (10.209319586s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons disable ingress --alsologtostderr -v=1: (7.503268532s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-539717
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-539717:

-- stdout --
	[
	    {
	        "Id": "5032d8b83719a93e9703efca79b9711b0527ecc8b7728d0a9877686530b0fcce",
	        "Created": "2023-07-17T22:56:31.411834119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1437456,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:56:31.742218868Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/5032d8b83719a93e9703efca79b9711b0527ecc8b7728d0a9877686530b0fcce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5032d8b83719a93e9703efca79b9711b0527ecc8b7728d0a9877686530b0fcce/hostname",
	        "HostsPath": "/var/lib/docker/containers/5032d8b83719a93e9703efca79b9711b0527ecc8b7728d0a9877686530b0fcce/hosts",
	        "LogPath": "/var/lib/docker/containers/5032d8b83719a93e9703efca79b9711b0527ecc8b7728d0a9877686530b0fcce/5032d8b83719a93e9703efca79b9711b0527ecc8b7728d0a9877686530b0fcce-json.log",
	        "Name": "/ingress-addon-legacy-539717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-539717:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-539717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/00b749d13a93438f3482fe393ad193d23020ce0218b72be69274547800791cb2-init/diff:/var/lib/docker/overlay2/fdc677bc34c4dd81c3e2a60b8c6dfef55cbcd01465515913bdab326c77319b46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/00b749d13a93438f3482fe393ad193d23020ce0218b72be69274547800791cb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/00b749d13a93438f3482fe393ad193d23020ce0218b72be69274547800791cb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/00b749d13a93438f3482fe393ad193d23020ce0218b72be69274547800791cb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-539717",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-539717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-539717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-539717",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-539717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76e73f80f1d428ec18a83de764e8f7656c16f2dbaf848a7bbb8fc5f602dfcde9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34346"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34345"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34342"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34344"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34343"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/76e73f80f1d4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-539717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5032d8b83719",
	                        "ingress-addon-legacy-539717"
	                    ],
	                    "NetworkID": "dec92b49ce0d8ec85fe55d2d1147d0196dd825ac126cf8ed0f93d0a7deaf4104",
	                    "EndpointID": "4095a4a1315bbaa95ed3de06831c7596dc098d0f2b1e44f62fbb6507ee63d25b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-539717 -n ingress-addon-legacy-539717
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-539717 logs -n 25: (1.04760307s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-034372                     | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-034372 ssh pgrep              | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-034372 image build -t         | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | localhost/my-image:functional-034372     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-034372 image ls               | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	| image          | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-034372                        | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-034372                     | functional-034372           | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:55 UTC |
	| start          | -p image-405790                          | image-405790                | jenkins | v1.31.0 | 17 Jul 23 22:55 UTC | 17 Jul 23 22:56 UTC |
	|                | --driver=docker                          |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-405790                | jenkins | v1.31.0 | 17 Jul 23 22:56 UTC | 17 Jul 23 22:56 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-405790                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-405790                | jenkins | v1.31.0 | 17 Jul 23 22:56 UTC | 17 Jul 23 22:56 UTC |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-405790                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-405790                | jenkins | v1.31.0 | 17 Jul 23 22:56 UTC | 17 Jul 23 22:56 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-405790                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-405790                | jenkins | v1.31.0 | 17 Jul 23 22:56 UTC | 17 Jul 23 22:56 UTC |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-405790                          |                             |         |         |                     |                     |
	| delete         | -p image-405790                          | image-405790                | jenkins | v1.31.0 | 17 Jul 23 22:56 UTC | 17 Jul 23 22:56 UTC |
	| start          | -p ingress-addon-legacy-539717           | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:56 UTC | 17 Jul 23 22:57 UTC |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                     |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-539717              | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:57 UTC | 17 Jul 23 22:58 UTC |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-539717              | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:58 UTC | 17 Jul 23 22:58 UTC |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-539717              | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:58 UTC | 17 Jul 23 22:58 UTC |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-539717 ip           | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:58 UTC | 17 Jul 23 22:58 UTC |
	| addons         | ingress-addon-legacy-539717              | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:58 UTC | 17 Jul 23 22:59 UTC |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-539717              | ingress-addon-legacy-539717 | jenkins | v1.31.0 | 17 Jul 23 22:59 UTC | 17 Jul 23 22:59 UTC |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:56:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:56:14.929668 1436995 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:56:14.929877 1436995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:56:14.929888 1436995 out.go:309] Setting ErrFile to fd 2...
	I0717 22:56:14.929894 1436995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:56:14.930156 1436995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	I0717 22:56:14.930567 1436995 out.go:303] Setting JSON to false
	I0717 22:56:14.931579 1436995 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23922,"bootTime":1689610653,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 22:56:14.931647 1436995 start.go:138] virtualization:  
	I0717 22:56:14.935563 1436995 out.go:177] * [ingress-addon-legacy-539717] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 22:56:14.937275 1436995 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:56:14.937440 1436995 notify.go:220] Checking for updates...
	I0717 22:56:14.941268 1436995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:56:14.943282 1436995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:56:14.945110 1436995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	I0717 22:56:14.946922 1436995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 22:56:14.948640 1436995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:56:14.950840 1436995 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:56:14.974645 1436995 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:56:14.974740 1436995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:56:15.132535 1436995 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 22:56:15.091296683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:56:15.132648 1436995 docker.go:294] overlay module found
	I0717 22:56:15.145457 1436995 out.go:177] * Using the docker driver based on user configuration
	I0717 22:56:15.154160 1436995 start.go:298] selected driver: docker
	I0717 22:56:15.154192 1436995 start.go:880] validating driver "docker" against <nil>
	I0717 22:56:15.154209 1436995 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:56:15.154965 1436995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:56:15.230083 1436995 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 22:56:15.219685132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:56:15.230261 1436995 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:56:15.230529 1436995 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:56:15.233001 1436995 out.go:177] * Using Docker driver with root privileges
	I0717 22:56:15.235370 1436995 cni.go:84] Creating CNI manager for ""
	I0717 22:56:15.235401 1436995 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 22:56:15.235412 1436995 start_flags.go:319] config:
	{Name:ingress-addon-legacy-539717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-539717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:56:15.239432 1436995 out.go:177] * Starting control plane node ingress-addon-legacy-539717 in cluster ingress-addon-legacy-539717
	I0717 22:56:15.241538 1436995 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 22:56:15.243531 1436995 out.go:177] * Pulling base image ...
	I0717 22:56:15.245631 1436995 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 22:56:15.245656 1436995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:56:15.268041 1436995 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:56:15.268071 1436995 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 22:56:15.340701 1436995 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0717 22:56:15.340726 1436995 cache.go:57] Caching tarball of preloaded images
	I0717 22:56:15.340928 1436995 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 22:56:15.343429 1436995 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 22:56:15.345564 1436995 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:56:15.465558 1436995 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0717 22:56:24.174920 1436995 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:56:24.175029 1436995 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:56:25.219948 1436995 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0717 22:56:25.220323 1436995 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/config.json ...
	I0717 22:56:25.220361 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/config.json: {Name:mk266590ce07ee5ddda9722369fce69820b6f9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:25.220559 1436995 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:56:25.220607 1436995 start.go:365] acquiring machines lock for ingress-addon-legacy-539717: {Name:mk7b77a38859cc98a42ebff2b5959f3d9f48fa3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:56:25.220674 1436995 start.go:369] acquired machines lock for "ingress-addon-legacy-539717" in 50.142µs
	I0717 22:56:25.220697 1436995 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-539717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-539717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 22:56:25.220772 1436995 start.go:125] createHost starting for "" (driver="docker")
	I0717 22:56:25.223314 1436995 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 22:56:25.223611 1436995 start.go:159] libmachine.API.Create for "ingress-addon-legacy-539717" (driver="docker")
	I0717 22:56:25.223648 1436995 client.go:168] LocalClient.Create starting
	I0717 22:56:25.223722 1436995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem
	I0717 22:56:25.223761 1436995 main.go:141] libmachine: Decoding PEM data...
	I0717 22:56:25.223783 1436995 main.go:141] libmachine: Parsing certificate...
	I0717 22:56:25.223845 1436995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem
	I0717 22:56:25.223869 1436995 main.go:141] libmachine: Decoding PEM data...
	I0717 22:56:25.223883 1436995 main.go:141] libmachine: Parsing certificate...
	I0717 22:56:25.224270 1436995 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-539717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 22:56:25.246204 1436995 cli_runner.go:211] docker network inspect ingress-addon-legacy-539717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 22:56:25.246291 1436995 network_create.go:281] running [docker network inspect ingress-addon-legacy-539717] to gather additional debugging logs...
	I0717 22:56:25.246318 1436995 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-539717
	W0717 22:56:25.263838 1436995 cli_runner.go:211] docker network inspect ingress-addon-legacy-539717 returned with exit code 1
	I0717 22:56:25.263871 1436995 network_create.go:284] error running [docker network inspect ingress-addon-legacy-539717]: docker network inspect ingress-addon-legacy-539717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-539717 not found
	I0717 22:56:25.263887 1436995 network_create.go:286] output of [docker network inspect ingress-addon-legacy-539717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-539717 not found
	
	** /stderr **
	I0717 22:56:25.263950 1436995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:56:25.282183 1436995 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000fac990}
	I0717 22:56:25.282225 1436995 network_create.go:123] attempt to create docker network ingress-addon-legacy-539717 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 22:56:25.282285 1436995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-539717 ingress-addon-legacy-539717
	I0717 22:56:25.357804 1436995 network_create.go:107] docker network ingress-addon-legacy-539717 192.168.49.0/24 created
	I0717 22:56:25.357836 1436995 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-539717" container
	I0717 22:56:25.357918 1436995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:56:25.375027 1436995 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-539717 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-539717 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:56:25.393531 1436995 oci.go:103] Successfully created a docker volume ingress-addon-legacy-539717
	I0717 22:56:25.393622 1436995 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-539717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-539717 --entrypoint /usr/bin/test -v ingress-addon-legacy-539717:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 22:56:26.731892 1436995 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-539717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-539717 --entrypoint /usr/bin/test -v ingress-addon-legacy-539717:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.338227863s)
	I0717 22:56:26.731922 1436995 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-539717
	I0717 22:56:26.731941 1436995 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 22:56:26.731961 1436995 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:56:26.732040 1436995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-539717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:56:31.328209 1436995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-539717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.5961256s)
	I0717 22:56:31.328241 1436995 kic.go:199] duration metric: took 4.596277 seconds to extract preloaded images to volume
	W0717 22:56:31.328392 1436995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:56:31.328517 1436995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:56:31.395721 1436995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-539717 --name ingress-addon-legacy-539717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-539717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-539717 --network ingress-addon-legacy-539717 --ip 192.168.49.2 --volume ingress-addon-legacy-539717:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:56:31.750053 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Running}}
	I0717 22:56:31.770176 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Status}}
	I0717 22:56:31.795245 1436995 cli_runner.go:164] Run: docker exec ingress-addon-legacy-539717 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:56:31.867616 1436995 oci.go:144] the created container "ingress-addon-legacy-539717" has a running status.
	I0717 22:56:31.867642 1436995 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa...
	I0717 22:56:32.811423 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 22:56:32.811474 1436995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:56:32.834739 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Status}}
	I0717 22:56:32.858460 1436995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:56:32.858483 1436995 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-539717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 22:56:32.934248 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Status}}
	I0717 22:56:32.971107 1436995 machine.go:88] provisioning docker machine ...
	I0717 22:56:32.971136 1436995 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-539717"
	I0717 22:56:32.971203 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:33.002371 1436995 main.go:141] libmachine: Using SSH client type: native
	I0717 22:56:33.002890 1436995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34346 <nil> <nil>}
	I0717 22:56:33.002906 1436995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-539717 && echo "ingress-addon-legacy-539717" | sudo tee /etc/hostname
	I0717 22:56:33.164486 1436995 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-539717
	
	I0717 22:56:33.164594 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:33.199340 1436995 main.go:141] libmachine: Using SSH client type: native
	I0717 22:56:33.199787 1436995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34346 <nil> <nil>}
	I0717 22:56:33.199807 1436995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-539717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-539717/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-539717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:56:33.330156 1436995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:56:33.330185 1436995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1384661/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1384661/.minikube}
	I0717 22:56:33.330207 1436995 ubuntu.go:177] setting up certificates
	I0717 22:56:33.330216 1436995 provision.go:83] configureAuth start
	I0717 22:56:33.330275 1436995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-539717
	I0717 22:56:33.348788 1436995 provision.go:138] copyHostCerts
	I0717 22:56:33.348828 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.pem
	I0717 22:56:33.349027 1436995 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.pem, removing ...
	I0717 22:56:33.349043 1436995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.pem
	I0717 22:56:33.349123 1436995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.pem (1078 bytes)
	I0717 22:56:33.349208 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-1384661/.minikube/cert.pem
	I0717 22:56:33.349230 1436995 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1384661/.minikube/cert.pem, removing ...
	I0717 22:56:33.349235 1436995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1384661/.minikube/cert.pem
	I0717 22:56:33.349267 1436995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1384661/.minikube/cert.pem (1123 bytes)
	I0717 22:56:33.349315 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-1384661/.minikube/key.pem
	I0717 22:56:33.349338 1436995 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1384661/.minikube/key.pem, removing ...
	I0717 22:56:33.349346 1436995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1384661/.minikube/key.pem
	I0717 22:56:33.349378 1436995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1384661/.minikube/key.pem (1679 bytes)
	I0717 22:56:33.349428 1436995 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-539717 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-539717]
	I0717 22:56:33.901556 1436995 provision.go:172] copyRemoteCerts
	I0717 22:56:33.901654 1436995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:56:33.901716 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:33.924312 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:56:34.020827 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:56:34.020957 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:56:34.051725 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:56:34.051790 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 22:56:34.080547 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:56:34.080609 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:56:34.108919 1436995 provision.go:86] duration metric: configureAuth took 778.690257ms
	I0717 22:56:34.108946 1436995 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:56:34.109152 1436995 config.go:182] Loaded profile config "ingress-addon-legacy-539717": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 22:56:34.109210 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:34.126606 1436995 main.go:141] libmachine: Using SSH client type: native
	I0717 22:56:34.127055 1436995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34346 <nil> <nil>}
	I0717 22:56:34.127073 1436995 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 22:56:34.258485 1436995 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 22:56:34.258508 1436995 ubuntu.go:71] root file system type: overlay
	I0717 22:56:34.258639 1436995 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 22:56:34.258716 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:34.277587 1436995 main.go:141] libmachine: Using SSH client type: native
	I0717 22:56:34.278035 1436995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34346 <nil> <nil>}
	I0717 22:56:34.278121 1436995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 22:56:34.420310 1436995 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 22:56:34.420416 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:34.439064 1436995 main.go:141] libmachine: Using SSH client type: native
	I0717 22:56:34.439516 1436995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34346 <nil> <nil>}
	I0717 22:56:34.439536 1436995 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 22:56:35.299046 1436995 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:51:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:56:34.416503182 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 22:56:35.299078 1436995 machine.go:91] provisioned docker machine in 2.327951614s
	I0717 22:56:35.299088 1436995 client.go:171] LocalClient.Create took 10.075431134s
	I0717 22:56:35.299106 1436995 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-539717" took 10.075494519s
	I0717 22:56:35.299118 1436995 start.go:300] post-start starting for "ingress-addon-legacy-539717" (driver="docker")
	I0717 22:56:35.299127 1436995 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:56:35.299206 1436995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:56:35.299254 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:35.317373 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:56:35.411999 1436995 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:56:35.416358 1436995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:56:35.416402 1436995 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:56:35.416414 1436995 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:56:35.416428 1436995 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 22:56:35.416442 1436995 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1384661/.minikube/addons for local assets ...
	I0717 22:56:35.416513 1436995 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1384661/.minikube/files for local assets ...
	I0717 22:56:35.416592 1436995 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/13900472.pem -> 13900472.pem in /etc/ssl/certs
	I0717 22:56:35.416606 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/13900472.pem -> /etc/ssl/certs/13900472.pem
	I0717 22:56:35.416710 1436995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:56:35.427341 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/13900472.pem --> /etc/ssl/certs/13900472.pem (1708 bytes)
	I0717 22:56:35.457310 1436995 start.go:303] post-start completed in 158.175896ms
	I0717 22:56:35.457765 1436995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-539717
	I0717 22:56:35.475292 1436995 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/config.json ...
	I0717 22:56:35.475585 1436995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:56:35.475632 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:35.493577 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:56:35.584007 1436995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:56:35.589884 1436995 start.go:128] duration metric: createHost completed in 10.369097756s
	I0717 22:56:35.589910 1436995 start.go:83] releasing machines lock for "ingress-addon-legacy-539717", held for 10.369225182s
	I0717 22:56:35.589983 1436995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-539717
	I0717 22:56:35.610669 1436995 ssh_runner.go:195] Run: cat /version.json
	I0717 22:56:35.610728 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:35.610974 1436995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:56:35.611034 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:56:35.633195 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:56:35.639715 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:56:35.861813 1436995 ssh_runner.go:195] Run: systemctl --version
	I0717 22:56:35.867728 1436995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:56:35.874011 1436995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 22:56:35.908194 1436995 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:56:35.908286 1436995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 22:56:35.929483 1436995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 22:56:35.949772 1436995 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:56:35.949796 1436995 start.go:466] detecting cgroup driver to use...
	I0717 22:56:35.949828 1436995 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:56:35.949960 1436995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:56:35.969626 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0717 22:56:35.981740 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 22:56:35.993955 1436995 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 22:56:35.994023 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 22:56:36.007388 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:56:36.021673 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 22:56:36.035008 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 22:56:36.048407 1436995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:56:36.061253 1436995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 22:56:36.074743 1436995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:56:36.086786 1436995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:56:36.098155 1436995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:56:36.192665 1436995 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 22:56:36.286284 1436995 start.go:466] detecting cgroup driver to use...
	I0717 22:56:36.286368 1436995 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:56:36.286455 1436995 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 22:56:36.308813 1436995 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 22:56:36.308963 1436995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 22:56:36.327250 1436995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:56:36.348789 1436995 ssh_runner.go:195] Run: which cri-dockerd
	I0717 22:56:36.354791 1436995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 22:56:36.367531 1436995 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 22:56:36.392481 1436995 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 22:56:36.508282 1436995 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 22:56:36.622359 1436995 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 22:56:36.622430 1436995 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 22:56:36.647157 1436995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:56:36.747337 1436995 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 22:56:37.042307 1436995 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 22:56:37.075081 1436995 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 22:56:37.106578 1436995 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0717 22:56:37.106685 1436995 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-539717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:56:37.123815 1436995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 22:56:37.128430 1436995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:56:37.141788 1436995 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 22:56:37.141856 1436995 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 22:56:37.163539 1436995 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0717 22:56:37.163562 1436995 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0717 22:56:37.163642 1436995 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 22:56:37.175065 1436995 ssh_runner.go:195] Run: which lz4
	I0717 22:56:37.179775 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0717 22:56:37.179869 1436995 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:56:37.184314 1436995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:56:37.184350 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0717 22:56:39.423680 1436995 docker.go:600] Took 2.243829 seconds to copy over tarball
	I0717 22:56:39.423808 1436995 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:56:41.938584 1436995 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.514721641s)
	I0717 22:56:41.938608 1436995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:56:42.140087 1436995 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 22:56:42.155206 1436995 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0717 22:56:42.185794 1436995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:56:42.296817 1436995 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 22:56:43.710737 1436995 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.41388837s)
	I0717 22:56:43.710826 1436995 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 22:56:43.735340 1436995 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0717 22:56:43.735359 1436995 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0717 22:56:43.735379 1436995 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:56:43.737106 1436995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:56:43.737251 1436995 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 22:56:43.737356 1436995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:43.737490 1436995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 22:56:43.737555 1436995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:56:43.737616 1436995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:56:43.737788 1436995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:56:43.737848 1436995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:56:43.737942 1436995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:43.739256 1436995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:56:43.739708 1436995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:56:43.739902 1436995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 22:56:43.740097 1436995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:56:43.740555 1436995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 22:56:43.740918 1436995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:56:43.741280 1436995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:56:44.165181 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0717 22:56:44.170481 1436995 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.170738 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0717 22:56:44.178306 1436995 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.178915 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0717 22:56:44.180421 1436995 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.180629 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0717 22:56:44.183466 1436995 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.183697 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 22:56:44.195413 1436995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0717 22:56:44.195510 1436995 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0717 22:56:44.195595 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0717 22:56:44.209469 1436995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0717 22:56:44.209566 1436995 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:56:44.209646 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	W0717 22:56:44.211842 1436995 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.212123 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 22:56:44.238334 1436995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0717 22:56:44.238421 1436995 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:56:44.238500 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	W0717 22:56:44.258063 1436995 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.258298 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:56:44.281897 1436995 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0717 22:56:44.281950 1436995 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 22:56:44.282001 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0717 22:56:44.282075 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0717 22:56:44.282114 1436995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0717 22:56:44.282140 1436995 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:56:44.282178 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:56:44.291163 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 22:56:44.291264 1436995 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0717 22:56:44.291294 1436995 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:56:44.291345 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0717 22:56:44.325502 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 22:56:44.329150 1436995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0717 22:56:44.329196 1436995 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:56:44.329247 1436995 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:56:44.360229 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0717 22:56:44.360299 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 22:56:44.360410 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0717 22:56:44.368432 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0717 22:56:44.490788 1436995 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 22:56:44.491029 1436995 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:44.512053 1436995 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 22:56:44.512150 1436995 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:44.512239 1436995 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:44.551118 1436995 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 22:56:44.551262 1436995 cache_images.go:92] LoadImages completed in 815.868335ms
	W0717 22:56:44.551346 1436995 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 22:56:44.551440 1436995 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 22:56:44.614383 1436995 cni.go:84] Creating CNI manager for ""
	I0717 22:56:44.614456 1436995 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 22:56:44.614471 1436995 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:56:44.614490 1436995 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-539717 NodeName:ingress-addon-legacy-539717 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 22:56:44.614640 1436995 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-539717"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:56:44.614720 1436995 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-539717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-539717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:56:44.614804 1436995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 22:56:44.625914 1436995 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:56:44.626051 1436995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:56:44.636700 1436995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0717 22:56:44.658059 1436995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 22:56:44.680231 1436995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0717 22:56:44.701657 1436995 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:56:44.706206 1436995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
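The `/etc/hosts` command in the log above uses a filter-then-append idiom so the `control-plane.minikube.internal` entry is never duplicated across restarts: strip any existing line for the name, append the current mapping, then copy the result into place. A minimal sketch of the same pattern against a scratch file (the file path and the stale `192.168.49.9` entry are illustrative, not from the run):

```shell
#!/usr/bin/env bash
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.9\tcontrol-plane.minikube.internal\n' > "$hosts"

update_entry() {
    local ip=$1 name=$2 tmp
    tmp=$(mktemp)
    # Keep every line that does not already map this hostname (tab-separated)...
    grep -v $'\t'"${name}"'$' "$hosts" > "$tmp" || true
    # ...append the fresh mapping, then swap the file into place.
    printf '%s\t%s\n' "$ip" "$name" >> "$tmp"
    cp "$tmp" "$hosts"
    rm -f "$tmp"
}

update_entry 192.168.49.2 control-plane.minikube.internal
update_entry 192.168.49.2 control-plane.minikube.internal  # re-run is a no-op

count=$(grep -c 'control-plane\.minikube\.internal' "$hosts")  # exactly one entry remains
rm -f "$hosts"
```

Because the filter runs before the append, the stale mapping is replaced rather than shadowed, and repeated provisioning runs leave the file unchanged.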
	I0717 22:56:44.719600 1436995 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717 for IP: 192.168.49.2
	I0717 22:56:44.719680 1436995 certs.go:190] acquiring lock for shared ca certs: {Name:mk6fe46c8df27a790849650201176fd556c5399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:44.719861 1436995 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key
	I0717 22:56:44.719922 1436995 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key
	I0717 22:56:44.719973 1436995 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.key
	I0717 22:56:44.719987 1436995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt with IP's: []
	I0717 22:56:45.792565 1436995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt ...
	I0717 22:56:45.792599 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: {Name:mke1842d4c74a9f5310f3729e089d2bf42be10ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:45.792803 1436995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.key ...
	I0717 22:56:45.792817 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.key: {Name:mkd8eaa965284481b9b4a80a267f383cd7671e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:45.792923 1436995 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key.dd3b5fb2
	I0717 22:56:45.792947 1436995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:56:46.444007 1436995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt.dd3b5fb2 ...
	I0717 22:56:46.444042 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt.dd3b5fb2: {Name:mk820e3bbd1307ca3405ba51a4b847a5519fe0e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:46.444231 1436995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key.dd3b5fb2 ...
	I0717 22:56:46.444245 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key.dd3b5fb2: {Name:mkf471ea84d3ece6d0347229d01cbefbb4a2e9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:46.444333 1436995 certs.go:337] copying /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt
	I0717 22:56:46.444430 1436995 certs.go:341] copying /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key
	I0717 22:56:46.444493 1436995 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.key
	I0717 22:56:46.444511 1436995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.crt with IP's: []
	I0717 22:56:46.611665 1436995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.crt ...
	I0717 22:56:46.611694 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.crt: {Name:mk8c3ecd22ef512209393dcd981f819330688571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:46.611876 1436995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.key ...
	I0717 22:56:46.611889 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.key: {Name:mkd935cf92c7c26a0feccda4b95e397a724330ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:46.611976 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 22:56:46.611995 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 22:56:46.612007 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 22:56:46.612023 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 22:56:46.612037 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:56:46.612048 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:56:46.612065 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:56:46.612082 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:56:46.612150 1436995 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/1390047.pem (1338 bytes)
	W0717 22:56:46.612191 1436995 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/1390047_empty.pem, impossibly tiny 0 bytes
	I0717 22:56:46.612205 1436995 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:56:46.612239 1436995 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:56:46.612269 1436995 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:56:46.612296 1436995 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/certs/key.pem (1679 bytes)
	I0717 22:56:46.612347 1436995 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/13900472.pem (1708 bytes)
	I0717 22:56:46.612383 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/13900472.pem -> /usr/share/ca-certificates/13900472.pem
	I0717 22:56:46.612402 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:56:46.612412 1436995 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/1390047.pem -> /usr/share/ca-certificates/1390047.pem
	I0717 22:56:46.612991 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:56:46.642725 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:56:46.671828 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:56:46.702163 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:56:46.731658 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:56:46.760815 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:56:46.790007 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:56:46.819229 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:56:46.848194 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/ssl/certs/13900472.pem --> /usr/share/ca-certificates/13900472.pem (1708 bytes)
	I0717 22:56:46.876915 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:56:46.906039 1436995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1384661/.minikube/certs/1390047.pem --> /usr/share/ca-certificates/1390047.pem (1338 bytes)
	I0717 22:56:46.935519 1436995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:56:46.958018 1436995 ssh_runner.go:195] Run: openssl version
	I0717 22:56:46.965231 1436995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13900472.pem && ln -fs /usr/share/ca-certificates/13900472.pem /etc/ssl/certs/13900472.pem"
	I0717 22:56:46.977387 1436995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13900472.pem
	I0717 22:56:46.982174 1436995 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:51 /usr/share/ca-certificates/13900472.pem
	I0717 22:56:46.982265 1436995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13900472.pem
	I0717 22:56:46.991003 1436995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13900472.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:56:47.003788 1436995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:56:47.016388 1436995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:56:47.021535 1436995 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:56:47.021655 1436995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:56:47.030293 1436995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:56:47.041954 1436995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1390047.pem && ln -fs /usr/share/ca-certificates/1390047.pem /etc/ssl/certs/1390047.pem"
	I0717 22:56:47.054567 1436995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1390047.pem
	I0717 22:56:47.059460 1436995 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:51 /usr/share/ca-certificates/1390047.pem
	I0717 22:56:47.059547 1436995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1390047.pem
	I0717 22:56:47.068215 1436995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1390047.pem /etc/ssl/certs/51391683.0"
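Each `test -L … || ln -fs …` command above guards the symlink creation, so the OpenSSL-style hash link (e.g. `3ec20f2e.0`) is only created when it does not exist yet and repeated runs stay idempotent. A minimal sketch of that guard on scratch files (the hash-link name and dummy cert are illustrative):

```shell
#!/usr/bin/env bash
set -eu
dir=$(mktemp -d)
printf 'dummy certificate\n' > "$dir/13900472.pem"

# Same shape as the logged command: create the hash symlink
# only if no link of that name exists yet.
link_cert() {
    test -L "$dir/3ec20f2e.0" || ln -fs "$dir/13900472.pem" "$dir/3ec20f2e.0"
}

link_cert
link_cert   # second call is a no-op: the link already exists

target=$(readlink "$dir/3ec20f2e.0")   # still points at the .pem file
rm -rf "$dir"
```

The hash name normally comes from `openssl x509 -hash -noout -in <cert>` (run just before each guard in the log), which is how OpenSSL locates CA certificates in a directory.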
	I0717 22:56:47.080103 1436995 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:56:47.085535 1436995 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
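The "likely first start" decision above is driven purely by the exit status of `ls` on the etcd certs directory: status 2 means the path does not exist yet. A minimal sketch of the same probe on a scratch path (the directory layout is illustrative):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)

# Probe the path the way the provisioner does: run ls and branch on status.
if ls "$dir/certs/etcd" >/dev/null 2>&1; then
    state=existing
else
    state=first-start   # ls exited non-zero: directory absent
fi

mkdir -p "$dir/certs/etcd"
if ls "$dir/certs/etcd" >/dev/null 2>&1; then
    state2=existing     # now present, so a later run would skip cert generation
fi
rm -rf "$dir"
```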
	I0717 22:56:47.085585 1436995 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-539717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-539717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:56:47.085708 1436995 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 22:56:47.106244 1436995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:56:47.117467 1436995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:56:47.128426 1436995 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 22:56:47.128492 1436995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:56:47.139330 1436995 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:56:47.139369 1436995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 22:56:47.201698 1436995 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 22:56:47.202053 1436995 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:56:47.431255 1436995 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:56:47.431414 1436995 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 22:56:47.431600 1436995 kubeadm.go:322] DOCKER_VERSION: 24.0.4
	I0717 22:56:47.431646 1436995 kubeadm.go:322] OS: Linux
	I0717 22:56:47.431693 1436995 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 22:56:47.431742 1436995 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 22:56:47.431789 1436995 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 22:56:47.431846 1436995 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 22:56:47.431896 1436995 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 22:56:47.431945 1436995 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 22:56:47.528289 1436995 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:56:47.528400 1436995 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:56:47.528498 1436995 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:56:47.741677 1436995 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:56:47.743077 1436995 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:56:47.743350 1436995 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:56:47.853299 1436995 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:56:47.856954 1436995 out.go:204]   - Generating certificates and keys ...
	I0717 22:56:47.857148 1436995 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:56:47.857261 1436995 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:56:48.213980 1436995 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:56:49.037659 1436995 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:56:49.361288 1436995 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 22:56:50.045908 1436995 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 22:56:50.671911 1436995 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 22:56:50.672277 1436995 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-539717 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 22:56:50.990698 1436995 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 22:56:50.991027 1436995 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-539717 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 22:56:52.795379 1436995 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:56:53.208793 1436995 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:56:53.447213 1436995 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 22:56:53.447420 1436995 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:56:53.790664 1436995 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:56:54.124764 1436995 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:56:54.345743 1436995 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:56:54.758640 1436995 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:56:54.759301 1436995 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:56:54.761647 1436995 out.go:204]   - Booting up control plane ...
	I0717 22:56:54.761746 1436995 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:56:54.777244 1436995 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:56:54.777324 1436995 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:56:54.777401 1436995 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:56:54.778193 1436995 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:57:07.287776 1436995 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.509452 seconds
	I0717 22:57:07.287889 1436995 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:57:07.305787 1436995 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:57:07.828630 1436995 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:57:07.828772 1436995 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-539717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 22:57:08.336682 1436995 kubeadm.go:322] [bootstrap-token] Using token: x6psd7.369doqadpoufvrte
	I0717 22:57:08.339010 1436995 out.go:204]   - Configuring RBAC rules ...
	I0717 22:57:08.339130 1436995 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:57:08.348698 1436995 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:57:08.366431 1436995 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:57:08.369944 1436995 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:57:08.373388 1436995 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:57:08.376751 1436995 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:57:08.386235 1436995 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:57:08.697710 1436995 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:57:08.772393 1436995 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:57:08.772421 1436995 kubeadm.go:322] 
	I0717 22:57:08.772479 1436995 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:57:08.772484 1436995 kubeadm.go:322] 
	I0717 22:57:08.772556 1436995 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:57:08.772561 1436995 kubeadm.go:322] 
	I0717 22:57:08.772584 1436995 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:57:08.772640 1436995 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:57:08.772687 1436995 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:57:08.772692 1436995 kubeadm.go:322] 
	I0717 22:57:08.772740 1436995 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:57:08.772810 1436995 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:57:08.772892 1436995 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:57:08.772897 1436995 kubeadm.go:322] 
	I0717 22:57:08.772976 1436995 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:57:08.773048 1436995 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:57:08.773056 1436995 kubeadm.go:322] 
	I0717 22:57:08.773135 1436995 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x6psd7.369doqadpoufvrte \
	I0717 22:57:08.773234 1436995 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e5d5c8c9181b8ed72220af3cac9466140f0edb69a687eef1ac98c0aceaf43e58 \
	I0717 22:57:08.773259 1436995 kubeadm.go:322]     --control-plane 
	I0717 22:57:08.773263 1436995 kubeadm.go:322] 
	I0717 22:57:08.773342 1436995 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:57:08.773347 1436995 kubeadm.go:322] 
	I0717 22:57:08.773424 1436995 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x6psd7.369doqadpoufvrte \
	I0717 22:57:08.773522 1436995 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e5d5c8c9181b8ed72220af3cac9466140f0edb69a687eef1ac98c0aceaf43e58 
	I0717 22:57:08.775654 1436995 kubeadm.go:322] W0717 22:56:47.200763    1666 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 22:57:08.775832 1436995 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 22:57:08.775953 1436995 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0717 22:57:08.776151 1436995 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 22:57:08.776250 1436995 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:57:08.776366 1436995 kubeadm.go:322] W0717 22:56:54.773016    1666 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 22:57:08.776493 1436995 kubeadm.go:322] W0717 22:56:54.774568    1666 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 22:57:08.776507 1436995 cni.go:84] Creating CNI manager for ""
	I0717 22:57:08.776521 1436995 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 22:57:08.776540 1436995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:57:08.776665 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:08.776733 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=ingress-addon-legacy-539717 minikube.k8s.io/updated_at=2023_07_17T22_57_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:08.816752 1436995 ops.go:34] apiserver oom_adj: -16
	I0717 22:57:09.249176 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:09.874860 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:10.375290 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:10.874954 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:11.374708 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:11.874981 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:12.375024 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:12.874697 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:13.374376 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:13.875021 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:14.374924 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:14.874331 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:15.374486 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:15.874354 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:16.374391 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:16.874906 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:17.375095 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:17.874716 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:18.374433 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:18.875070 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:19.374316 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:19.875039 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:20.374566 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:20.874929 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:21.374789 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:21.874283 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:22.374851 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:22.874625 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.374372 1436995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.615316 1436995 kubeadm.go:1081] duration metric: took 14.83870198s to wait for elevateKubeSystemPrivileges.
	I0717 22:57:23.615350 1436995 kubeadm.go:406] StartCluster complete in 36.529768659s
	I0717 22:57:23.615366 1436995 settings.go:142] acquiring lock: {Name:mkc0c7943c743f0a2c4e51e89031f3fcf4ae225e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:23.615447 1436995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:57:23.616259 1436995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/kubeconfig: {Name:mk792c43221d3b29507daafdb089ed87fdff17a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:23.617046 1436995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:57:23.617335 1436995 config.go:182] Loaded profile config "ingress-addon-legacy-539717": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 22:57:23.617606 1436995 kapi.go:59] client config for ingress-addon-legacy-539717: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:57:23.617794 1436995 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:57:23.618019 1436995 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-539717"
	I0717 22:57:23.618042 1436995 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-539717"
	I0717 22:57:23.618085 1436995 host.go:66] Checking if "ingress-addon-legacy-539717" exists ...
	I0717 22:57:23.618590 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Status}}
	I0717 22:57:23.619052 1436995 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-539717"
	I0717 22:57:23.619072 1436995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-539717"
	I0717 22:57:23.619352 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Status}}
	I0717 22:57:23.619910 1436995 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 22:57:23.685073 1436995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:57:23.686919 1436995 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:23.686944 1436995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:57:23.687005 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:57:23.686644 1436995 kapi.go:59] client config for ingress-addon-legacy-539717: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:57:23.708250 1436995 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-539717"
	I0717 22:57:23.708299 1436995 host.go:66] Checking if "ingress-addon-legacy-539717" exists ...
	I0717 22:57:23.708770 1436995 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-539717 --format={{.State.Status}}
	I0717 22:57:23.720568 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:57:23.740318 1436995 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:23.740348 1436995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:57:23.740423 1436995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-539717
	I0717 22:57:23.775244 1436995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34346 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/ingress-addon-legacy-539717/id_rsa Username:docker}
	I0717 22:57:23.877480 1436995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:57:23.914311 1436995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:24.077789 1436995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:24.233443 1436995 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-539717" context rescaled to 1 replicas
	I0717 22:57:24.233556 1436995 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 22:57:24.237775 1436995 out.go:177] * Verifying Kubernetes components...
	I0717 22:57:24.240611 1436995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:24.886456 1436995 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.008938724s)
	I0717 22:57:24.886490 1436995 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 22:57:24.889398 1436995 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 22:57:24.887767 1436995 kapi.go:59] client config for ingress-addon-legacy-539717: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:57:24.892099 1436995 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-539717" to be "Ready" ...
	I0717 22:57:24.892337 1436995 addons.go:502] enable addons completed in 1.27454215s: enabled=[storage-provisioner default-storageclass]
	I0717 22:57:24.897766 1436995 node_ready.go:49] node "ingress-addon-legacy-539717" has status "Ready":"True"
	I0717 22:57:24.897840 1436995 node_ready.go:38] duration metric: took 5.710212ms waiting for node "ingress-addon-legacy-539717" to be "Ready" ...
	I0717 22:57:24.897865 1436995 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:24.907491 1436995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:26.927023 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:29.426204 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:31.925376 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:33.925883 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:36.425303 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:38.925557 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:40.926117 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.425057 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:45.429148 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:47.925248 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:49.925394 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:52.425373 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:54.426152 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:56.926142 1436995 pod_ready.go:102] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:57.426265 1436995 pod_ready.go:92] pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:57.426292 1436995 pod_ready.go:81] duration metric: took 32.518717045s waiting for pod "coredns-66bff467f8-7xvw4" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.426306 1436995 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.431119 1436995 pod_ready.go:92] pod "etcd-ingress-addon-legacy-539717" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:57.431143 1436995 pod_ready.go:81] duration metric: took 4.827017ms waiting for pod "etcd-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.431154 1436995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.436124 1436995 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-539717" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:57.436148 1436995 pod_ready.go:81] duration metric: took 4.985828ms waiting for pod "kube-apiserver-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.436159 1436995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.441187 1436995 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-539717" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:57.441213 1436995 pod_ready.go:81] duration metric: took 5.046282ms waiting for pod "kube-controller-manager-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.441225 1436995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kcpbq" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.446216 1436995 pod_ready.go:92] pod "kube-proxy-kcpbq" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:57.446241 1436995 pod_ready.go:81] duration metric: took 5.009204ms waiting for pod "kube-proxy-kcpbq" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.446252 1436995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.621598 1436995 request.go:628] Waited for 175.269372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-539717
	I0717 22:57:57.821587 1436995 request.go:628] Waited for 197.341225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-539717
	I0717 22:57:57.824224 1436995 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-539717" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:57.824250 1436995 pod_ready.go:81] duration metric: took 377.990123ms waiting for pod "kube-scheduler-ingress-addon-legacy-539717" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:57.824263 1436995 pod_ready.go:38] duration metric: took 32.926351296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:57.824281 1436995 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:57:57.824352 1436995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:57:57.837477 1436995 api_server.go:72] duration metric: took 33.60383727s to wait for apiserver process to appear ...
	I0717 22:57:57.837537 1436995 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:57:57.837560 1436995 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 22:57:57.846557 1436995 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 22:57:57.847392 1436995 api_server.go:141] control plane version: v1.18.20
	I0717 22:57:57.847418 1436995 api_server.go:131] duration metric: took 9.869295ms to wait for apiserver health ...
	I0717 22:57:57.847427 1436995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:57:58.021856 1436995 request.go:628] Waited for 174.361175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:57:58.027980 1436995 system_pods.go:59] 7 kube-system pods found
	I0717 22:57:58.028061 1436995 system_pods.go:61] "coredns-66bff467f8-7xvw4" [e5f73455-c2ee-44fc-9df4-75ac6087ee89] Running
	I0717 22:57:58.028081 1436995 system_pods.go:61] "etcd-ingress-addon-legacy-539717" [537c52f8-e838-4956-920d-bb9f9857dddb] Running
	I0717 22:57:58.028102 1436995 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-539717" [33c46e58-f043-40fc-b273-374a8964b10d] Running
	I0717 22:57:58.028134 1436995 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-539717" [32c40422-5d45-488d-8046-ea8b1f792fa5] Running
	I0717 22:57:58.028158 1436995 system_pods.go:61] "kube-proxy-kcpbq" [e613d0dc-2ae9-4bf6-91a6-d7ac3b2c236f] Running
	I0717 22:57:58.028180 1436995 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-539717" [2b203476-a3e4-4a7b-87b1-9c8614f596f2] Running
	I0717 22:57:58.028217 1436995 system_pods.go:61] "storage-provisioner" [fa53c506-afa5-4c73-8fcc-fc0fb1911f23] Running
	I0717 22:57:58.028239 1436995 system_pods.go:74] duration metric: took 180.80552ms to wait for pod list to return data ...
	I0717 22:57:58.028261 1436995 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:57:58.221699 1436995 request.go:628] Waited for 193.334904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:57:58.224212 1436995 default_sa.go:45] found service account: "default"
	I0717 22:57:58.224240 1436995 default_sa.go:55] duration metric: took 195.942736ms for default service account to be created ...
	I0717 22:57:58.224251 1436995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:57:58.421663 1436995 request.go:628] Waited for 197.332141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:57:58.427006 1436995 system_pods.go:86] 7 kube-system pods found
	I0717 22:57:58.427040 1436995 system_pods.go:89] "coredns-66bff467f8-7xvw4" [e5f73455-c2ee-44fc-9df4-75ac6087ee89] Running
	I0717 22:57:58.427047 1436995 system_pods.go:89] "etcd-ingress-addon-legacy-539717" [537c52f8-e838-4956-920d-bb9f9857dddb] Running
	I0717 22:57:58.427052 1436995 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-539717" [33c46e58-f043-40fc-b273-374a8964b10d] Running
	I0717 22:57:58.427059 1436995 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-539717" [32c40422-5d45-488d-8046-ea8b1f792fa5] Running
	I0717 22:57:58.427063 1436995 system_pods.go:89] "kube-proxy-kcpbq" [e613d0dc-2ae9-4bf6-91a6-d7ac3b2c236f] Running
	I0717 22:57:58.427069 1436995 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-539717" [2b203476-a3e4-4a7b-87b1-9c8614f596f2] Running
	I0717 22:57:58.427074 1436995 system_pods.go:89] "storage-provisioner" [fa53c506-afa5-4c73-8fcc-fc0fb1911f23] Running
	I0717 22:57:58.427080 1436995 system_pods.go:126] duration metric: took 202.824982ms to wait for k8s-apps to be running ...
	I0717 22:57:58.427091 1436995 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:57:58.427147 1436995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:58.444192 1436995 system_svc.go:56] duration metric: took 17.090275ms WaitForService to wait for kubelet.
	I0717 22:57:58.444221 1436995 kubeadm.go:581] duration metric: took 34.210586497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:57:58.444239 1436995 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:57:58.621674 1436995 request.go:628] Waited for 177.364692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0717 22:57:58.624590 1436995 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 22:57:58.624623 1436995 node_conditions.go:123] node cpu capacity is 2
	I0717 22:57:58.624636 1436995 node_conditions.go:105] duration metric: took 180.391315ms to run NodePressure ...
	I0717 22:57:58.624648 1436995 start.go:228] waiting for startup goroutines ...
	I0717 22:57:58.624654 1436995 start.go:233] waiting for cluster config update ...
	I0717 22:57:58.624664 1436995 start.go:242] writing updated cluster config ...
	I0717 22:57:58.624999 1436995 ssh_runner.go:195] Run: rm -f paused
	I0717 22:57:58.691606 1436995 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 22:57:58.694271 1436995 out.go:177] 
	W0717 22:57:58.696401 1436995 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 22:57:58.698461 1436995 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 22:57:58.700446 1436995 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-539717" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Jul 17 22:56:43 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:56:43.685565892Z" level=info msg="Daemon has completed initialization"
	Jul 17 22:56:43 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:56:43.708340344Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 22:56:43 ingress-addon-legacy-539717 systemd[1]: Started Docker Application Container Engine.
	Jul 17 22:56:43 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:56:43.710370515Z" level=info msg="API listen on [::]:2376"
	Jul 17 22:58:00 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:00.423305722Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Jul 17 22:58:01 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:01.962897906Z" level=info msg="ignoring event" container=a8269643859fa379ea9aa577eab4947d7c7e0d42ab0baac4e24b1a138289df98 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:01 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:01.987870850Z" level=info msg="ignoring event" container=58f99818b48245994185e975f7fb2cb5ca7b8ff7e7646bd3938b7f2072e11e4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:02 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:02.486355105Z" level=info msg="ignoring event" container=b5ea4ce039ce7ddf6542406e76963a0444182f5fd02e9b19d04aa633fadc9388 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:02 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:02.648266354Z" level=info msg="ignoring event" container=a40800bc0190e1a112d272ac55b2f489e68aefea5d45aad270cd837456dddc95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:03 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:03.485483796Z" level=info msg="ignoring event" container=6d7ff16e68e3def854d4b9213cde133ecccf699437cb1fa63b362625c9b1de33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:03 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:03.965033066Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Jul 17 22:58:10 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:10.756412378Z" level=warning msg="Published ports are discarded when using host network mode"
	Jul 17 22:58:10 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:10.778534249Z" level=warning msg="Published ports are discarded when using host network mode"
	Jul 17 22:58:10 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:10.949896076Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Jul 17 22:58:17 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:17.463130543Z" level=info msg="ignoring event" container=0e6978e40fc021032ff9cc6f365aea7d45724f01d7ba361e4bcc7050a4f12d55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:17 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:17.750757116Z" level=info msg="ignoring event" container=4b7018bf498121ff7ae7ad950a6cc4ad383ba9e143041395f28ca1e573accfef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:31 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:31.446876445Z" level=info msg="ignoring event" container=842a26d6b3adf0ff5985567d7f949aa20e9e6adc472a45fe946fb999a8a6e1aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:37 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:37.573703195Z" level=info msg="ignoring event" container=d5c85d4680aa18f565e583bc56a8f1618ff3416b2e6dfbdf5261af71106d3502 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:37 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:37.915792640Z" level=info msg="ignoring event" container=e87b5106bcdf9dbfccacdcca7450982fd7374e0814be10b278d7f0dc201ab8c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:51 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:51.452599477Z" level=info msg="ignoring event" container=eb6e2ad32d23a9bba7428f950e29680258355a189be06ae3a157fd49d96f0d0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:58:52 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:58:52.358843198Z" level=info msg="ignoring event" container=34e2b0bac4ee7d5f523b7209573f2a9e1c00afb9557d668ab53f948f74e501b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:59:03 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:59:03.204745199Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cfef5e0bba1272812eb2c003df8026fb12f7c430d37bf055a0de9d8947085174
	Jul 17 22:59:03 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:59:03.234651047Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cfef5e0bba1272812eb2c003df8026fb12f7c430d37bf055a0de9d8947085174
	Jul 17 22:59:03 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:59:03.291407803Z" level=info msg="ignoring event" container=cfef5e0bba1272812eb2c003df8026fb12f7c430d37bf055a0de9d8947085174 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:59:03 ingress-addon-legacy-539717 dockerd[1312]: time="2023-07-17T22:59:03.360266374Z" level=info msg="ignoring event" container=153f4ef22e299a8949d45bfd663430cbca0ff9123aedf07a77a389a300e64a52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eb6e2ad32d23a       13753a81eccfd                                                                                                      17 seconds ago       Exited              hello-world-app           2                   e706f169cee6a       hello-world-app-5f5d8b66bb-gdzx7
	259a6ffe00309       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                                      42 seconds ago       Running             nginx                     0                   dd51d3b18c555       nginx
	cfef5e0bba127       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   About a minute ago   Exited              controller                0                   153f4ef22e299       ingress-nginx-controller-7fcf777cb7-88m29
	a40800bc0190e       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   6d7ff16e68e3d       ingress-nginx-admission-patch-s4ddj
	58f99818b4824       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   b5ea4ce039ce7       ingress-nginx-admission-create-l4gsh
	262ebd1999e8e       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   f3dffabaca583       storage-provisioner
	4f1d5a0b10a54       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   cb34b8db670ac       kube-proxy-kcpbq
	47e15c20ec562       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   ad00f1d81428b       coredns-66bff467f8-7xvw4
	c773f5c081a57       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   a15dd22e62ef7       etcd-ingress-addon-legacy-539717
	8b11f580791d4       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   2093c6da330f3       kube-scheduler-ingress-addon-legacy-539717
	fbc7fb784f5e8       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   124be7d41032d       kube-controller-manager-ingress-addon-legacy-539717
	75ec086db59fe       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   5b286e2bb02c6       kube-apiserver-ingress-addon-legacy-539717
	
	* 
	* ==> coredns [47e15c20ec56] <==
	* [INFO] 172.17.0.1:3483 - 33083 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002583s
	[INFO] 172.17.0.1:26274 - 42749 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077161s
	[INFO] 172.17.0.1:48177 - 61316 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026527s
	[INFO] 172.17.0.1:3483 - 7300 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025198s
	[INFO] 172.17.0.1:26274 - 37009 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077293s
	[INFO] 172.17.0.1:48177 - 31960 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003013s
	[INFO] 172.17.0.1:3483 - 32889 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023467s
	[INFO] 172.17.0.1:3483 - 58451 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001647285s
	[INFO] 172.17.0.1:26274 - 21151 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001942786s
	[INFO] 172.17.0.1:48177 - 7093 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001806934s
	[INFO] 172.17.0.1:26274 - 18076 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001588537s
	[INFO] 172.17.0.1:3483 - 25756 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001177466s
	[INFO] 172.17.0.1:26274 - 58368 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128748s
	[INFO] 172.17.0.1:48177 - 39792 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001522845s
	[INFO] 172.17.0.1:3483 - 27492 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053309s
	[INFO] 172.17.0.1:48177 - 42850 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000027397s
	[INFO] 172.17.0.1:63734 - 49499 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086408s
	[INFO] 172.17.0.1:63734 - 61219 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039516s
	[INFO] 172.17.0.1:63734 - 36132 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033174s
	[INFO] 172.17.0.1:63734 - 44968 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033477s
	[INFO] 172.17.0.1:63734 - 39638 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003538s
	[INFO] 172.17.0.1:63734 - 39851 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003209s
	[INFO] 172.17.0.1:63734 - 29336 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001081097s
	[INFO] 172.17.0.1:63734 - 53888 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001647515s
	[INFO] 172.17.0.1:63734 - 12299 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040985s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-539717
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-539717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=ingress-addon-legacy-539717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_57_08_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:57:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-539717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:59:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:58:42 +0000   Mon, 17 Jul 2023 22:56:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:58:42 +0000   Mon, 17 Jul 2023 22:56:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:58:42 +0000   Mon, 17 Jul 2023 22:56:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:58:42 +0000   Mon, 17 Jul 2023 22:57:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-539717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 252651df2ce04d129f5d471930589a47
	  System UUID:                7dc0c23d-261a-4faa-ad13-ece7ef3a05bb
	  Boot ID:                    cbdc664b-32f3-4468-95d3-fdbd4fe2a3f0
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-gdzx7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 coredns-66bff467f8-7xvw4                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-ingress-addon-legacy-539717                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-apiserver-ingress-addon-legacy-539717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-539717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-kcpbq                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-ingress-addon-legacy-539717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m10s (x5 over 2m10s)  kubelet     Node ingress-addon-legacy-539717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x5 over 2m10s)  kubelet     Node ingress-addon-legacy-539717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x5 over 2m10s)  kubelet     Node ingress-addon-legacy-539717 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s                   kubelet     Node ingress-addon-legacy-539717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                   kubelet     Node ingress-addon-legacy-539717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                   kubelet     Node ingress-addon-legacy-539717 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                106s                   kubelet     Node ingress-addon-legacy-539717 status is now: NodeReady
	  Normal  Starting                 103s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001148] FS-Cache: O-key=[8] 'ae75ed0000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=000000009ea66d98
	[  +0.001038] FS-Cache: N-key=[8] 'ae75ed0000000000'
	[  +0.002518] FS-Cache: Duplicate cookie detected
	[  +0.000686] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000966] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000cba6f97a
	[  +0.001096] FS-Cache: O-key=[8] 'ae75ed0000000000'
	[  +0.000713] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000ead8159c
	[  +0.001030] FS-Cache: N-key=[8] 'ae75ed0000000000'
	[  +3.432161] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000947] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=000000000de2aa3e
	[  +0.001072] FS-Cache: O-key=[8] 'ad75ed0000000000'
	[  +0.000697] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001037] FS-Cache: N-key=[8] 'ad75ed0000000000'
	[  +0.467894] FS-Cache: Duplicate cookie detected
	[  +0.000723] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000993] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000ad7e9ae9
	[  +0.001073] FS-Cache: O-key=[8] 'b375ed0000000000'
	[  +0.000759] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=000000009f9a066e
	[  +0.001012] FS-Cache: N-key=[8] 'b375ed0000000000'
	
	* 
	* ==> etcd [c773f5c081a5] <==
	* raft2023/07/17 22:57:00 INFO: aec36adc501070cc became follower at term 0
	raft2023/07/17 22:57:00 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/17 22:57:00 INFO: aec36adc501070cc became follower at term 1
	raft2023/07/17 22:57:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 22:57:00.571392 W | auth: simple token is not cryptographically signed
	2023-07-17 22:57:00.575049 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-17 22:57:00.576401 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/17 22:57:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 22:57:00.577353 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-07-17 22:57:00.579406 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 22:57:00.579920 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 22:57:00.580074 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/07/17 22:57:01 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/17 22:57:01 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/17 22:57:01 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/17 22:57:01 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/17 22:57:01 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-17 22:57:01.066122 I | etcdserver: published {Name:ingress-addon-legacy-539717 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-17 22:57:01.066524 I | embed: ready to serve client requests
	2023-07-17 22:57:01.068175 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 22:57:01.068410 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 22:57:01.069498 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 22:57:01.069717 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-17 22:57:01.069802 I | embed: ready to serve client requests
	2023-07-17 22:57:01.071076 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  22:59:09 up  6:41,  0 users,  load average: 1.07, 2.20, 2.32
	Linux ingress-addon-legacy-539717 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kube-apiserver [75ec086db59f] <==
	* I0717 22:57:05.352052       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0717 22:57:05.396621       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 22:57:05.613054       1 cache.go:39] Caches are synced for autoregister controller
	I0717 22:57:05.616920       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 22:57:05.617128       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 22:57:05.617260       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 22:57:05.623474       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 22:57:06.311873       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 22:57:06.311905       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 22:57:06.322702       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 22:57:06.326872       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 22:57:06.326892       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 22:57:06.763662       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:57:06.803004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 22:57:06.892428       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 22:57:06.893654       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 22:57:06.905589       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 22:57:07.796972       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 22:57:08.677977       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 22:57:08.751159       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 22:57:12.231840       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 22:57:23.771143       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 22:57:23.955754       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 22:57:59.492462       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 22:58:24.106296       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [fbc7fb784f5e] <==
	* I0717 22:57:23.799544       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"9b699ea0-5c1a-4d4a-8782-e83eee77d709", APIVersion:"apps/v1", ResourceVersion:"332", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7xvw4
	I0717 22:57:23.911206       1 shared_informer.go:230] Caches are synced for expand 
	I0717 22:57:23.940421       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0717 22:57:23.951764       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0717 22:57:23.964101       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"ba0c44f2-ae98-4092-a7d3-754e2d6a2339", APIVersion:"apps/v1", ResourceVersion:"222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-kcpbq
	I0717 22:57:23.990265       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0717 22:57:23.990353       1 shared_informer.go:230] Caches are synced for stateful set 
	I0717 22:57:23.990476       1 shared_informer.go:230] Caches are synced for attach detach 
	E0717 22:57:23.992902       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"ba0c44f2-ae98-4092-a7d3-754e2d6a2339", ResourceVersion:"222", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63825231428, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40014aad40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40014aada0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40014aae00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001272380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40014aae60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40014aaec0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014aaf80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40012fab90), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40014930c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400016acb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000eb20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001493128)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0717 22:57:24.001151       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 22:57:24.028809       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 22:57:24.028834       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 22:57:24.096288       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0717 22:57:24.098878       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 22:57:25.147331       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0717 22:57:25.147385       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 22:57:59.474087       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f09f05a4-e478-4d9f-a767-1700501e0357", APIVersion:"apps/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 22:57:59.519865       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d9cd4ad0-87c9-4db0-8b38-debf4e7d2154", APIVersion:"batch/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-l4gsh
	I0717 22:57:59.519900       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"706f4175-3b97-4f4b-820f-c8bb8acd8c13", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-88m29
	I0717 22:57:59.597466       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6f881656-7c05-43ba-ba74-9d4e833a0599", APIVersion:"batch/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-s4ddj
	I0717 22:58:02.447455       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d9cd4ad0-87c9-4db0-8b38-debf4e7d2154", APIVersion:"batch/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 22:58:03.455733       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6f881656-7c05-43ba-ba74-9d4e833a0599", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 22:58:34.873747       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"6039d05a-4edb-4a24-b148-1fa071b6f101", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-gdzx7
	I0717 22:58:34.877730       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"399432a1-f1a6-47d4-b20c-bb354a45969e", APIVersion:"apps/v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	E0717 22:59:05.829488       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-kkd4z" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [4f1d5a0b10a5] <==
	* W0717 22:57:25.111121       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 22:57:25.123273       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0717 22:57:25.123390       1 server_others.go:186] Using iptables Proxier.
	I0717 22:57:25.123771       1 server.go:583] Version: v1.18.20
	I0717 22:57:25.127059       1 config.go:315] Starting service config controller
	I0717 22:57:25.127413       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 22:57:25.129372       1 config.go:133] Starting endpoints config controller
	I0717 22:57:25.129570       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 22:57:25.229027       1 shared_informer.go:230] Caches are synced for service config 
	I0717 22:57:25.229977       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8b11f580791d] <==
	* I0717 22:57:00.477761       1 serving.go:313] Generated self-signed cert in-memory
	W0717 22:57:05.478156       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 22:57:05.478250       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 22:57:05.478277       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 22:57:05.478302       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 22:57:05.539716       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 22:57:05.539909       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 22:57:05.543865       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 22:57:05.544126       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:57:05.544235       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:57:05.544346       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 22:57:05.550627       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:57:05.550937       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:57:05.551130       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:57:05.551311       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:57:05.551497       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:57:05.554266       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:57:05.555492       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:57:05.555811       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:05.556230       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:57:05.556961       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:57:05.557796       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:05.558003       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:57:06.419712       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0717 22:57:07.044415       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Jul 17 22:58:46 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:46.306088    2857 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 842a26d6b3adf0ff5985567d7f949aa20e9e6adc472a45fe946fb999a8a6e1aa
	Jul 17 22:58:46 ingress-addon-legacy-539717 kubelet[2857]: E0717 22:58:46.306693    2857 pod_workers.go:191] Error syncing pod 557d4c78-00ff-4718-a647-066a4daed5ee ("kube-ingress-dns-minikube_kube-system(557d4c78-00ff-4718-a647-066a4daed5ee)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(557d4c78-00ff-4718-a647-066a4daed5ee)"
	Jul 17 22:58:50 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:50.775389    2857 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-pvp6k" (UniqueName: "kubernetes.io/secret/557d4c78-00ff-4718-a647-066a4daed5ee-minikube-ingress-dns-token-pvp6k") pod "557d4c78-00ff-4718-a647-066a4daed5ee" (UID: "557d4c78-00ff-4718-a647-066a4daed5ee")
	Jul 17 22:58:50 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:50.780104    2857 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557d4c78-00ff-4718-a647-066a4daed5ee-minikube-ingress-dns-token-pvp6k" (OuterVolumeSpecName: "minikube-ingress-dns-token-pvp6k") pod "557d4c78-00ff-4718-a647-066a4daed5ee" (UID: "557d4c78-00ff-4718-a647-066a4daed5ee"). InnerVolumeSpecName "minikube-ingress-dns-token-pvp6k". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:58:50 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:50.875762    2857 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-pvp6k" (UniqueName: "kubernetes.io/secret/557d4c78-00ff-4718-a647-066a4daed5ee-minikube-ingress-dns-token-pvp6k") on node "ingress-addon-legacy-539717" DevicePath ""
	Jul 17 22:58:51 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:51.306048    2857 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e87b5106bcdf9dbfccacdcca7450982fd7374e0814be10b278d7f0dc201ab8c9
	Jul 17 22:58:51 ingress-addon-legacy-539717 kubelet[2857]: W0717 22:58:51.480695    2857 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod75a0f7be-0b88-43b5-bcad-90f2e0baba51/eb6e2ad32d23a9bba7428f950e29680258355a189be06ae3a157fd49d96f0d0a": none of the resources are being tracked.
	Jul 17 22:58:51 ingress-addon-legacy-539717 kubelet[2857]: W0717 22:58:51.905120    2857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-gdzx7 through plugin: invalid network status for
	Jul 17 22:58:51 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:51.910303    2857 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e87b5106bcdf9dbfccacdcca7450982fd7374e0814be10b278d7f0dc201ab8c9
	Jul 17 22:58:51 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:51.910710    2857 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: eb6e2ad32d23a9bba7428f950e29680258355a189be06ae3a157fd49d96f0d0a
	Jul 17 22:58:51 ingress-addon-legacy-539717 kubelet[2857]: E0717 22:58:51.911015    2857 pod_workers.go:191] Error syncing pod 75a0f7be-0b88-43b5-bcad-90f2e0baba51 ("hello-world-app-5f5d8b66bb-gdzx7_default(75a0f7be-0b88-43b5-bcad-90f2e0baba51)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-gdzx7_default(75a0f7be-0b88-43b5-bcad-90f2e0baba51)"
	Jul 17 22:58:52 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:58:52.928377    2857 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 842a26d6b3adf0ff5985567d7f949aa20e9e6adc472a45fe946fb999a8a6e1aa
	Jul 17 22:58:52 ingress-addon-legacy-539717 kubelet[2857]: W0717 22:58:52.934711    2857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-gdzx7 through plugin: invalid network status for
	Jul 17 22:59:01 ingress-addon-legacy-539717 kubelet[2857]: E0717 22:59:01.174567    2857 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-88m29.1772c9d853adeddd", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-88m29", UID:"546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2", APIVersion:"v1", ResourceVersion:"462", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-539717"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258fcd4a371bdd, ext:112601437091, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258fcd4a371bdd, ext:112601437091, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-88m29.1772c9d853adeddd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 22:59:01 ingress-addon-legacy-539717 kubelet[2857]: E0717 22:59:01.197815    2857 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-88m29.1772c9d853adeddd", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-88m29", UID:"546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2", APIVersion:"v1", ResourceVersion:"462", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-539717"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258fcd4a371bdd, ext:112601437091, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258fcd4b3bd6ec, ext:112618524338, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-88m29.1772c9d853adeddd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 22:59:04 ingress-addon-legacy-539717 kubelet[2857]: W0717 22:59:04.041241    2857 pod_container_deletor.go:77] Container "153f4ef22e299a8949d45bfd663430cbca0ff9123aedf07a77a389a300e64a52" not found in pod's containers
	Jul 17 22:59:04 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:04.306037    2857 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: eb6e2ad32d23a9bba7428f950e29680258355a189be06ae3a157fd49d96f0d0a
	Jul 17 22:59:04 ingress-addon-legacy-539717 kubelet[2857]: E0717 22:59:04.306522    2857 pod_workers.go:191] Error syncing pod 75a0f7be-0b88-43b5-bcad-90f2e0baba51 ("hello-world-app-5f5d8b66bb-gdzx7_default(75a0f7be-0b88-43b5-bcad-90f2e0baba51)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-gdzx7_default(75a0f7be-0b88-43b5-bcad-90f2e0baba51)"
	Jul 17 22:59:05 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:05.342488    2857 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-grt5n" (UniqueName: "kubernetes.io/secret/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2-ingress-nginx-token-grt5n") pod "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2" (UID: "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2")
	Jul 17 22:59:05 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:05.342540    2857 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2-webhook-cert") pod "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2" (UID: "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2")
	Jul 17 22:59:05 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:05.349351    2857 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2" (UID: "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:59:05 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:05.350976    2857 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2-ingress-nginx-token-grt5n" (OuterVolumeSpecName: "ingress-nginx-token-grt5n") pod "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2" (UID: "546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2"). InnerVolumeSpecName "ingress-nginx-token-grt5n". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:59:05 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:05.442873    2857 reconciler.go:319] Volume detached for volume "ingress-nginx-token-grt5n" (UniqueName: "kubernetes.io/secret/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2-ingress-nginx-token-grt5n") on node "ingress-addon-legacy-539717" DevicePath ""
	Jul 17 22:59:05 ingress-addon-legacy-539717 kubelet[2857]: I0717 22:59:05.442926    2857 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2-webhook-cert") on node "ingress-addon-legacy-539717" DevicePath ""
	Jul 17 22:59:06 ingress-addon-legacy-539717 kubelet[2857]: W0717 22:59:06.320495    2857 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/546ef1ad-5a1f-46ed-9368-e6dfbfc0b7b2/volumes" does not exist
	
	* 
	* ==> storage-provisioner [262ebd1999e8] <==
	* I0717 22:57:27.013413       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:57:27.037443       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:57:27.037847       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:57:27.051409       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:57:27.052019       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4527294b-ac2e-45f7-9ce8-172880bf0711", APIVersion:"v1", ResourceVersion:"384", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-539717_16513b58-dd06-444b-8ab1-070758194a75 became leader
	I0717 22:57:27.052208       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-539717_16513b58-dd06-444b-8ab1-070758194a75!
	I0717 22:57:27.153126       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-539717_16513b58-dd06-444b-8ab1-070758194a75!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-539717 -n ingress-addon-legacy-539717
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-539717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (59.43s)

                                                
                                    

Test pass (292/319)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.95
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.44
10 TestDownloadOnly/v1.27.3/json-events 11.11
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.61
20 TestOffline 77.13
22 TestAddons/Setup 158.45
24 TestAddons/parallel/Registry 15.4
26 TestAddons/parallel/InspektorGadget 10.95
27 TestAddons/parallel/MetricsServer 6.02
30 TestAddons/parallel/CSI 54.86
31 TestAddons/parallel/Headlamp 12.98
32 TestAddons/parallel/CloudSpanner 5.71
35 TestAddons/serial/GCPAuth/Namespaces 0.2
36 TestAddons/StoppedEnableDisable 11.36
37 TestCertOptions 40.59
38 TestCertExpiration 259.12
39 TestDockerFlags 45.37
40 TestForceSystemdFlag 46.36
41 TestForceSystemdEnv 42.56
47 TestErrorSpam/setup 38.07
48 TestErrorSpam/start 0.8
49 TestErrorSpam/status 1.09
50 TestErrorSpam/pause 1.42
51 TestErrorSpam/unpause 1.55
52 TestErrorSpam/stop 2.25
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 62.66
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 36.79
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.12
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.18
64 TestFunctional/serial/CacheCmd/cache/add_local 0.98
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
69 TestFunctional/serial/CacheCmd/cache/delete 0.11
70 TestFunctional/serial/MinikubeKubectlCmd 0.14
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 43.02
73 TestFunctional/serial/ComponentHealth 0.11
74 TestFunctional/serial/LogsCmd 1.49
75 TestFunctional/serial/LogsFileCmd 1.42
76 TestFunctional/serial/InvalidService 5.59
78 TestFunctional/parallel/ConfigCmd 0.53
79 TestFunctional/parallel/DashboardCmd 14.86
80 TestFunctional/parallel/DryRun 0.71
81 TestFunctional/parallel/InternationalLanguage 0.26
82 TestFunctional/parallel/StatusCmd 1.11
86 TestFunctional/parallel/ServiceCmdConnect 7.73
87 TestFunctional/parallel/AddonsCmd 0.2
88 TestFunctional/parallel/PersistentVolumeClaim 26.41
90 TestFunctional/parallel/SSHCmd 0.84
91 TestFunctional/parallel/CpCmd 1.56
93 TestFunctional/parallel/FileSync 0.34
94 TestFunctional/parallel/CertSync 2.31
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 0.86
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
109 TestFunctional/parallel/ImageCommands/ImageBuild 3.47
110 TestFunctional/parallel/ImageCommands/Setup 1.96
111 TestFunctional/parallel/DockerEnv/bash 1.8
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.52
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
116 TestFunctional/parallel/ServiceCmd/DeployApp 11.31
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.89
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.91
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.86
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
121 TestFunctional/parallel/ServiceCmd/List 0.49
122 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.68
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
125 TestFunctional/parallel/ServiceCmd/Format 0.57
126 TestFunctional/parallel/ServiceCmd/URL 0.54
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.68
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.59
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
140 TestFunctional/parallel/ProfileCmd/profile_list 0.5
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
142 TestFunctional/parallel/MountCmd/any-port 8.74
143 TestFunctional/parallel/MountCmd/specific-port 2.37
144 TestFunctional/parallel/MountCmd/VerifyCleanup 2.41
145 TestFunctional/delete_addon-resizer_images 0.08
146 TestFunctional/delete_my-image_image 0.02
147 TestFunctional/delete_minikube_cached_images 0.02
151 TestImageBuild/serial/Setup 37.21
152 TestImageBuild/serial/NormalBuild 1.99
153 TestImageBuild/serial/BuildWithBuildArg 0.96
154 TestImageBuild/serial/BuildWithDockerIgnore 0.75
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
158 TestIngressAddonLegacy/StartLegacyK8sCluster 103.84
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.98
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.72
165 TestJSONOutput/start/Command 60.96
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.63
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.59
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 5.87
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.24
190 TestKicCustomNetwork/create_custom_network 33.66
191 TestKicCustomNetwork/use_default_bridge_network 34.37
192 TestKicExistingNetwork 37.13
193 TestKicCustomSubnet 37.88
194 TestKicStaticIP 36.22
195 TestMainNoArgs 0.05
196 TestMinikubeProfile 75.69
199 TestMountStart/serial/StartWithMountFirst 8.03
200 TestMountStart/serial/VerifyMountFirst 0.28
201 TestMountStart/serial/StartWithMountSecond 7.93
202 TestMountStart/serial/VerifyMountSecond 0.29
203 TestMountStart/serial/DeleteFirst 1.51
204 TestMountStart/serial/VerifyMountPostDelete 0.28
205 TestMountStart/serial/Stop 1.24
206 TestMountStart/serial/RestartStopped 8.2
207 TestMountStart/serial/VerifyMountPostStop 0.28
210 TestMultiNode/serial/FreshStart2Nodes 78.89
211 TestMultiNode/serial/DeployApp2Nodes 36.65
212 TestMultiNode/serial/PingHostFrom2Pods 1.2
213 TestMultiNode/serial/AddNode 21.83
214 TestMultiNode/serial/ProfileList 0.36
215 TestMultiNode/serial/CopyFile 11.24
216 TestMultiNode/serial/StopNode 2.46
217 TestMultiNode/serial/StartAfterStop 14.5
218 TestMultiNode/serial/RestartKeepsNodes 121.87
219 TestMultiNode/serial/DeleteNode 5.37
220 TestMultiNode/serial/StopMultiNode 21.88
221 TestMultiNode/serial/RestartMultiNode 86.76
222 TestMultiNode/serial/ValidateNameConflict 43.75
227 TestPreload 168.71
229 TestScheduledStopUnix 109.49
230 TestSkaffold 112.61
232 TestInsufficientStorage 11.4
233 TestRunningBinaryUpgrade 124.44
235 TestKubernetesUpgrade 151.22
236 TestMissingContainerUpgrade 196.91
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
239 TestNoKubernetes/serial/StartWithK8s 47.03
240 TestNoKubernetes/serial/StartWithStopK8s 8.18
241 TestNoKubernetes/serial/Start 10.33
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.5
243 TestNoKubernetes/serial/ProfileList 1.65
244 TestNoKubernetes/serial/Stop 1.37
245 TestNoKubernetes/serial/StartNoArgs 8.44
246 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
247 TestStoppedBinaryUpgrade/Setup 1.18
248 TestStoppedBinaryUpgrade/Upgrade 108.45
249 TestStoppedBinaryUpgrade/MinikubeLogs 2.41
258 TestPause/serial/Start 77.11
259 TestPause/serial/SecondStartNoReconfiguration 38.97
271 TestPause/serial/Pause 0.98
272 TestPause/serial/VerifyStatus 0.43
273 TestPause/serial/Unpause 0.87
274 TestPause/serial/PauseAgain 1.1
275 TestPause/serial/DeletePaused 2.45
276 TestPause/serial/VerifyDeletedResources 0.47
278 TestStartStop/group/old-k8s-version/serial/FirstStart 136.7
279 TestStartStop/group/old-k8s-version/serial/DeployApp 9.74
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.74
282 TestStartStop/group/no-preload/serial/FirstStart 76
283 TestStartStop/group/old-k8s-version/serial/Stop 11.78
284 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
285 TestStartStop/group/old-k8s-version/serial/SecondStart 448.74
286 TestStartStop/group/no-preload/serial/DeployApp 8.52
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
288 TestStartStop/group/no-preload/serial/Stop 11.1
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
290 TestStartStop/group/no-preload/serial/SecondStart 342
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
294 TestStartStop/group/no-preload/serial/Pause 3.26
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
297 TestStartStop/group/embed-certs/serial/FirstStart 75.07
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.21
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.46
300 TestStartStop/group/old-k8s-version/serial/Pause 3.85
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.77
303 TestStartStop/group/embed-certs/serial/DeployApp 9.81
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.97
305 TestStartStop/group/embed-certs/serial/Stop 11.04
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.58
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/embed-certs/serial/SecondStart 351.7
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.7
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.96
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.6
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.03
314 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.03
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.52
317 TestStartStop/group/embed-certs/serial/Pause 4.63
319 TestStartStop/group/newest-cni/serial/FirstStart 54.17
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.48
323 TestNetworkPlugins/group/auto/Start 69.45
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.74
326 TestStartStop/group/newest-cni/serial/Stop 11.41
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
328 TestStartStop/group/newest-cni/serial/SecondStart 36.89
329 TestNetworkPlugins/group/auto/KubeletFlags 0.32
330 TestNetworkPlugins/group/auto/NetCatPod 11.42
331 TestNetworkPlugins/group/auto/DNS 0.35
332 TestNetworkPlugins/group/auto/Localhost 0.32
333 TestNetworkPlugins/group/auto/HairPin 0.29
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
337 TestStartStop/group/newest-cni/serial/Pause 4.71
338 TestNetworkPlugins/group/kindnet/Start 69.66
339 TestNetworkPlugins/group/calico/Start 86.94
340 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
342 TestNetworkPlugins/group/kindnet/NetCatPod 14.62
343 TestNetworkPlugins/group/kindnet/DNS 0.25
344 TestNetworkPlugins/group/kindnet/Localhost 0.25
345 TestNetworkPlugins/group/kindnet/HairPin 0.29
346 TestNetworkPlugins/group/calico/ControllerPod 5.04
347 TestNetworkPlugins/group/calico/KubeletFlags 0.51
348 TestNetworkPlugins/group/calico/NetCatPod 14.65
349 TestNetworkPlugins/group/custom-flannel/Start 73.42
350 TestNetworkPlugins/group/calico/DNS 0.39
351 TestNetworkPlugins/group/calico/Localhost 0.47
352 TestNetworkPlugins/group/calico/HairPin 0.46
353 TestNetworkPlugins/group/false/Start 93.98
354 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.49
356 TestNetworkPlugins/group/custom-flannel/DNS 0.25
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
359 TestNetworkPlugins/group/enable-default-cni/Start 90.78
360 TestNetworkPlugins/group/false/KubeletFlags 0.45
361 TestNetworkPlugins/group/false/NetCatPod 10.65
362 TestNetworkPlugins/group/false/DNS 0.32
363 TestNetworkPlugins/group/false/Localhost 0.33
364 TestNetworkPlugins/group/false/HairPin 0.29
365 TestNetworkPlugins/group/flannel/Start 65.79
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.49
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
371 TestNetworkPlugins/group/flannel/ControllerPod 5.04
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
373 TestNetworkPlugins/group/flannel/NetCatPod 15.48
374 TestNetworkPlugins/group/bridge/Start 89.57
375 TestNetworkPlugins/group/flannel/DNS 0.27
376 TestNetworkPlugins/group/flannel/Localhost 0.24
377 TestNetworkPlugins/group/flannel/HairPin 0.25
378 TestNetworkPlugins/group/kubenet/Start 57.56
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
380 TestNetworkPlugins/group/bridge/NetCatPod 11.52
381 TestNetworkPlugins/group/kubenet/KubeletFlags 0.33
382 TestNetworkPlugins/group/kubenet/NetCatPod 9.4
383 TestNetworkPlugins/group/bridge/DNS 0.31
384 TestNetworkPlugins/group/bridge/Localhost 0.26
385 TestNetworkPlugins/group/bridge/HairPin 0.23
386 TestNetworkPlugins/group/kubenet/DNS 0.43
387 TestNetworkPlugins/group/kubenet/Localhost 0.28
388 TestNetworkPlugins/group/kubenet/HairPin 0.25
TestDownloadOnly/v1.16.0/json-events (10.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-516896 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-516896 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.946737271s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.95s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-516896
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-516896: exit status 85 (443.249318ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-516896 | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |          |
	|         | -p download-only-516896        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:46:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:46:08.401892 1390053 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:46:08.402031 1390053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:46:08.402039 1390053 out.go:309] Setting ErrFile to fd 2...
	I0717 22:46:08.402045 1390053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:46:08.402330 1390053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	W0717 22:46:08.402457 1390053 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-1384661/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-1384661/.minikube/config/config.json: no such file or directory
	I0717 22:46:08.402863 1390053 out.go:303] Setting JSON to true
	I0717 22:46:08.403898 1390053 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23316,"bootTime":1689610653,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 22:46:08.403969 1390053 start.go:138] virtualization:  
	I0717 22:46:08.407265 1390053 out.go:97] [download-only-516896] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 22:46:08.409620 1390053 out.go:169] MINIKUBE_LOCATION=16899
	W0717 22:46:08.407453 1390053 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 22:46:08.407513 1390053 notify.go:220] Checking for updates...
	I0717 22:46:08.415068 1390053 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:46:08.417865 1390053 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:46:08.420026 1390053 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	I0717 22:46:08.422429 1390053 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 22:46:08.426803 1390053 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 22:46:08.427135 1390053 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:46:08.450816 1390053 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:46:08.450911 1390053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:46:08.534935 1390053 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-17 22:46:08.525026697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:46:08.535045 1390053 docker.go:294] overlay module found
	I0717 22:46:08.537085 1390053 out.go:97] Using the docker driver based on user configuration
	I0717 22:46:08.537115 1390053 start.go:298] selected driver: docker
	I0717 22:46:08.537123 1390053 start.go:880] validating driver "docker" against <nil>
	I0717 22:46:08.537246 1390053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:46:08.611599 1390053 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-17 22:46:08.601493207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:46:08.611771 1390053 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:46:08.612032 1390053 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 22:46:08.612189 1390053 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 22:46:08.614491 1390053 out.go:169] Using Docker driver with root privileges
	I0717 22:46:08.616675 1390053 cni.go:84] Creating CNI manager for ""
	I0717 22:46:08.616708 1390053 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 22:46:08.616724 1390053 start_flags.go:319] config:
	{Name:download-only-516896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-516896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:46:08.618917 1390053 out.go:97] Starting control plane node download-only-516896 in cluster download-only-516896
	I0717 22:46:08.618957 1390053 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 22:46:08.621029 1390053 out.go:97] Pulling base image ...
	I0717 22:46:08.621070 1390053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 22:46:08.621217 1390053 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:46:08.639048 1390053 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 22:46:08.639198 1390053 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 22:46:08.639296 1390053 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 22:46:08.693510 1390053 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0717 22:46:08.693550 1390053 cache.go:57] Caching tarball of preloaded images
	I0717 22:46:08.693697 1390053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 22:46:08.695946 1390053 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 22:46:08.695971 1390053 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:46:08.821880 1390053 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0717 22:46:13.750207 1390053 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 22:46:16.782116 1390053 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:46:16.782252 1390053 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:46:17.668295 1390053 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 22:46:17.668735 1390053 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/download-only-516896/config.json ...
	I0717 22:46:17.668772 1390053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/download-only-516896/config.json: {Name:mk916d5f3b58b67c51dc87fb9f907f318c68097d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:46:17.668992 1390053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 22:46:17.669198 1390053 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-516896"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.44s)

TestDownloadOnly/v1.27.3/json-events (11.11s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-516896 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-516896 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.109249173s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (11.11s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-516896
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-516896: exit status 85 (79.04029ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-516896 | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |          |
	|         | -p download-only-516896        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-516896 | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |          |
	|         | -p download-only-516896        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:46:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:46:19.792208 1390129 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:46:19.792396 1390129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:46:19.792416 1390129 out.go:309] Setting ErrFile to fd 2...
	I0717 22:46:19.792433 1390129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:46:19.792745 1390129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	W0717 22:46:19.792909 1390129 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-1384661/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-1384661/.minikube/config/config.json: no such file or directory
	I0717 22:46:19.793165 1390129 out.go:303] Setting JSON to true
	I0717 22:46:19.794208 1390129 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23327,"bootTime":1689610653,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 22:46:19.794305 1390129 start.go:138] virtualization:  
	I0717 22:46:19.820432 1390129 out.go:97] [download-only-516896] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 22:46:19.820808 1390129 notify.go:220] Checking for updates...
	I0717 22:46:19.852254 1390129 out.go:169] MINIKUBE_LOCATION=16899
	I0717 22:46:19.884678 1390129 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:46:19.922665 1390129 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:46:19.948925 1390129 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	I0717 22:46:19.982026 1390129 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 22:46:20.060151 1390129 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 22:46:20.061368 1390129 config.go:182] Loaded profile config "download-only-516896": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0717 22:46:20.061436 1390129 start.go:788] api.Load failed for download-only-516896: filestore "download-only-516896": Docker machine "download-only-516896" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 22:46:20.061580 1390129 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 22:46:20.061609 1390129 start.go:788] api.Load failed for download-only-516896: filestore "download-only-516896": Docker machine "download-only-516896" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 22:46:20.087174 1390129 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:46:20.087275 1390129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:46:20.174319 1390129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 22:46:20.163413728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:46:20.174437 1390129 docker.go:294] overlay module found
	I0717 22:46:20.176939 1390129 out.go:97] Using the docker driver based on existing profile
	I0717 22:46:20.177004 1390129 start.go:298] selected driver: docker
	I0717 22:46:20.177013 1390129 start.go:880] validating driver "docker" against &{Name:download-only-516896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-516896 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:46:20.177245 1390129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:46:20.257295 1390129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 22:46:20.247311581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:46:20.257735 1390129 cni.go:84] Creating CNI manager for ""
	I0717 22:46:20.257755 1390129 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 22:46:20.257766 1390129 start_flags.go:319] config:
	{Name:download-only-516896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-516896 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:46:20.260359 1390129 out.go:97] Starting control plane node download-only-516896 in cluster download-only-516896
	I0717 22:46:20.260382 1390129 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 22:46:20.262443 1390129 out.go:97] Pulling base image ...
	I0717 22:46:20.262473 1390129 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 22:46:20.262622 1390129 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:46:20.279403 1390129 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 22:46:20.279527 1390129 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 22:46:20.279555 1390129 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 22:46:20.279562 1390129 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 22:46:20.279570 1390129 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 22:46:20.330509 1390129 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0717 22:46:20.330555 1390129 cache.go:57] Caching tarball of preloaded images
	I0717 22:46:20.331333 1390129 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 22:46:20.333865 1390129 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 22:46:20.333897 1390129 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:46:20.454608 1390129 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4?checksum=md5:e061b1178966dc348ac19219444153f4 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0717 22:46:28.667887 1390129 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:46:28.667995 1390129 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0717 22:46:29.433308 1390129 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 22:46:29.433457 1390129 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/download-only-516896/config.json ...
	I0717 22:46:29.433677 1390129 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 22:46:29.433897 1390129 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/16899-1384661/.minikube/cache/linux/arm64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-516896"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-516896
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-731746 --alsologtostderr --binary-mirror http://127.0.0.1:41699 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-731746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-731746
--- PASS: TestBinaryMirror (0.61s)

TestOffline (77.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-096135 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-096135 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m14.721456323s)
helpers_test.go:175: Cleaning up "offline-docker-096135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-096135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-096135: (2.40459073s)
--- PASS: TestOffline (77.13s)

TestAddons/Setup (158.45s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-534909 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-534909 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m38.448474199s)
--- PASS: TestAddons/Setup (158.45s)

TestAddons/parallel/Registry (15.4s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 33.04939ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zwjdj" [81ea78ea-50d4-4d00-b0e6-6217c7bc9dba] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010326865s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-624gr" [39384673-9aaa-4d0e-9753-3aa82af62305] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011914013s
addons_test.go:316: (dbg) Run:  kubectl --context addons-534909 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-534909 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-534909 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.274282661s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 ip
2023/07/17 22:49:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.40s)

TestAddons/parallel/InspektorGadget (10.95s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jg9x8" [e33f93d5-695f-41c4-9af2-bad5ebc760cc] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008204773s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-534909
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-534909: (5.943818862s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

TestAddons/parallel/MetricsServer (6.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.664754ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-42sjp" [b402c187-9b1c-4e91-b96c-b1cadf466549] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009463905s
addons_test.go:391: (dbg) Run:  kubectl --context addons-534909 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

TestAddons/parallel/CSI (54.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.202111ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-534909 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-534909 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [12a8ffd7-68b9-49bc-932d-44b5a705b713] Pending
helpers_test.go:344: "task-pv-pod" [12a8ffd7-68b9-49bc-932d-44b5a705b713] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [12a8ffd7-68b9-49bc-932d-44b5a705b713] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.011784718s
addons_test.go:560: (dbg) Run:  kubectl --context addons-534909 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-534909 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-534909 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-534909 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-534909 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-534909 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-534909 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-534909 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5f1decd0-861f-4eeb-97e2-4e1d10bf46ff] Pending
helpers_test.go:344: "task-pv-pod-restore" [5f1decd0-861f-4eeb-97e2-4e1d10bf46ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5f1decd0-861f-4eeb-97e2-4e1d10bf46ff] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.015510441s
addons_test.go:602: (dbg) Run:  kubectl --context addons-534909 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-534909 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-534909 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-534909 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.988891901s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-534909 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.86s)

TestAddons/parallel/Headlamp (12.98s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-534909 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-534909 --alsologtostderr -v=1: (1.954075798s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-dsvzb" [87a557c1-d928-4744-b93b-9890f6737fc2] Pending
helpers_test.go:344: "headlamp-66f6498c69-dsvzb" [87a557c1-d928-4744-b93b-9890f6737fc2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-dsvzb" [87a557c1-d928-4744-b93b-9890f6737fc2] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.021641742s
--- PASS: TestAddons/parallel/Headlamp (12.98s)

TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-8jssf" [3503acd4-88ac-4160-8830-7e53477dd4d1] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012765735s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-534909
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-534909 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-534909 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/StoppedEnableDisable (11.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-534909
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-534909: (11.079583711s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-534909
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-534909
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-534909
--- PASS: TestAddons/StoppedEnableDisable (11.36s)

TestCertOptions (40.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-971338 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0717 23:28:55.967516 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-971338 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (37.603403943s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-971338 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-971338 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-971338 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-971338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-971338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-971338: (2.256273553s)
--- PASS: TestCertOptions (40.59s)

TestCertExpiration (259.12s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-780740 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0717 23:27:14.154844 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-780740 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (48.753162657s)
E0717 23:28:10.424481 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-780740 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-780740 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.891685396s)
helpers_test.go:175: Cleaning up "cert-expiration-780740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-780740
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-780740: (2.472657292s)
--- PASS: TestCertExpiration (259.12s)

TestDockerFlags (45.37s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-720593 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-720593 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.188598245s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-720593 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-720593 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-720593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-720593
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-720593: (2.336333408s)
--- PASS: TestDockerFlags (45.37s)

TestForceSystemdFlag (46.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-121283 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-121283 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.177030207s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-121283 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-121283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-121283
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-121283: (2.566551968s)
--- PASS: TestForceSystemdFlag (46.36s)

TestForceSystemdEnv (42.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-888366 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-888366 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.724910789s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-888366 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-888366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-888366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-888366: (2.327120666s)
--- PASS: TestForceSystemdEnv (42.56s)

TestErrorSpam/setup (38.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-212671 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212671 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-212671 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-212671 --driver=docker  --container-runtime=docker: (38.065315239s)
--- PASS: TestErrorSpam/setup (38.07s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.55s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

TestErrorSpam/stop (2.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 stop: (2.043847246s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-212671 --log_dir /tmp/nospam-212671 stop
--- PASS: TestErrorSpam/stop (2.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16899-1384661/.minikube/files/etc/test/nested/copy/1390047/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034372 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-034372 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m2.66124607s)
--- PASS: TestFunctional/serial/StartWithProxy (62.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.79s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034372 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-034372 --alsologtostderr -v=8: (36.783563465s)
functional_test.go:659: soft start took 36.784063053s for "functional-034372" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.79s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-034372 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 cache add registry.k8s.io/pause:3.1: (1.08126669s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 cache add registry.k8s.io/pause:3.3: (1.157606324s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-034372 /tmp/TestFunctionalserialCacheCmdcacheadd_local248499587/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cache add minikube-local-cache-test:functional-034372
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cache delete minikube-local-cache-test:functional-034372
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-034372
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (333.826207ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 kubectl -- --context functional-034372 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-034372 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (43.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034372 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 22:54:11.108493 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.114660 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.124928 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.145044 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.186757 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.267001 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.427328 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:11.747799 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:12.388074 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:54:13.668298 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-034372 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.022802654s)
functional_test.go:757: restart took 43.022936045s for "functional-034372" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.02s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-034372 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 logs: (1.485068659s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 logs --file /tmp/TestFunctionalserialLogsFileCmd2898926920/001/logs.txt
E0717 22:54:16.229081 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 logs --file /tmp/TestFunctionalserialLogsFileCmd2898926920/001/logs.txt: (1.421588569s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (5.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-034372 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-034372
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-034372: exit status 115 (491.421826ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30774 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-034372 delete -f testdata/invalidsvc.yaml
E0717 22:54:21.349835 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
functional_test.go:2323: (dbg) Done: kubectl --context functional-034372 delete -f testdata/invalidsvc.yaml: (1.753818293s)
--- PASS: TestFunctional/serial/InvalidService (5.59s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 config get cpus: exit status 14 (74.654253ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 config get cpus: exit status 14 (77.106095ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (14.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-034372 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-034372 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1430673: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.86s)

TestFunctional/parallel/DryRun (0.71s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-034372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (274.906207ms)

-- stdout --
	* [functional-034372] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 22:55:12.479616 1430015 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:55:12.479803 1430015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:55:12.479813 1430015 out.go:309] Setting ErrFile to fd 2...
	I0717 22:55:12.479823 1430015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:55:12.480141 1430015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	I0717 22:55:12.480517 1430015 out.go:303] Setting JSON to false
	I0717 22:55:12.481792 1430015 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23860,"bootTime":1689610653,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 22:55:12.481856 1430015 start.go:138] virtualization:  
	I0717 22:55:12.484986 1430015 out.go:177] * [functional-034372] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 22:55:12.487340 1430015 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:55:12.487410 1430015 notify.go:220] Checking for updates...
	I0717 22:55:12.489604 1430015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:55:12.491564 1430015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:55:12.493493 1430015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	I0717 22:55:12.495415 1430015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 22:55:12.497679 1430015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:55:12.500440 1430015 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 22:55:12.501087 1430015 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:55:12.528624 1430015 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:55:12.528724 1430015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:55:12.654289 1430015 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 22:55:12.643668546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:55:12.654406 1430015 docker.go:294] overlay module found
	I0717 22:55:12.656510 1430015 out.go:177] * Using the docker driver based on existing profile
	I0717 22:55:12.657976 1430015 start.go:298] selected driver: docker
	I0717 22:55:12.657995 1430015 start.go:880] validating driver "docker" against &{Name:functional-034372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-034372 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:55:12.658105 1430015 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:55:12.660110 1430015 out.go:177] 
	W0717 22:55:12.661693 1430015 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 22:55:12.664115 1430015 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034372 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.71s)

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-034372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-034372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (259.783482ms)

-- stdout --
	* [functional-034372] minikube v1.31.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 22:55:13.185164 1430207 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:55:13.185390 1430207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:55:13.185417 1430207 out.go:309] Setting ErrFile to fd 2...
	I0717 22:55:13.185436 1430207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:55:13.185859 1430207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	I0717 22:55:13.186440 1430207 out.go:303] Setting JSON to false
	I0717 22:55:13.187669 1430207 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23861,"bootTime":1689610653,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 22:55:13.187768 1430207 start.go:138] virtualization:  
	I0717 22:55:13.191955 1430207 out.go:177] * [functional-034372] minikube v1.31.0 sur Ubuntu 20.04 (arm64)
	I0717 22:55:13.194003 1430207 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:55:13.196171 1430207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:55:13.194145 1430207 notify.go:220] Checking for updates...
	I0717 22:55:13.198559 1430207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	I0717 22:55:13.200940 1430207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	I0717 22:55:13.203003 1430207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 22:55:13.204965 1430207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:55:13.207406 1430207 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 22:55:13.207995 1430207 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:55:13.235391 1430207 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:55:13.235494 1430207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:55:13.347688 1430207 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 22:55:13.337113643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 22:55:13.347825 1430207 docker.go:294] overlay module found
	I0717 22:55:13.351993 1430207 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 22:55:13.354779 1430207 start.go:298] selected driver: docker
	I0717 22:55:13.354803 1430207 start.go:880] validating driver "docker" against &{Name:functional-034372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-034372 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:55:13.354934 1430207 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:55:13.357792 1430207 out.go:177] 
	W0717 22:55:13.359998 1430207 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 22:55:13.361786 1430207 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (7.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-034372 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-034372 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-85sv6" [1730afa9-5032-48aa-80c2-36215566c9b3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-85sv6" [1730afa9-5032-48aa-80c2-36215566c9b3] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.009111548s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30727
functional_test.go:1674: http://192.168.49.2:30727: success! body:

Hostname: hello-node-connect-58d66798bb-85sv6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30727
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.73s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (26.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [57f58de8-d841-4ea9-8bb8-266423bdcf91] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009768644s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-034372 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-034372 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-034372 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-034372 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a9d06cbc-2cb1-4d7e-b76c-fdc11bdf2899] Pending
E0717 22:54:52.071245 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [a9d06cbc-2cb1-4d7e-b76c-fdc11bdf2899] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a9d06cbc-2cb1-4d7e-b76c-fdc11bdf2899] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008573979s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-034372 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-034372 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-034372 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [39e873a9-0781-41b0-bf49-8e63ef493cf8] Pending
helpers_test.go:344: "sp-pod" [39e873a9-0781-41b0-bf49-8e63ef493cf8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [39e873a9-0781-41b0-bf49-8e63ef493cf8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.023606515s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-034372 exec sp-pod -- ls /tmp/mount
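The flow above applies `testdata/storage-provisioner/pvc.yaml`, checks the claim with `get pvc myclaim`, then mounts it from a pod across a delete/recreate cycle to prove the data survives. A minimal claim manifest of that shape looks like the following; the claim name matches the one queried above, but the access mode and size are illustrative, not the actual testdata contents:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim          # matches the claim name queried above
spec:
  accessModes:
    - ReadWriteOnce      # illustrative; single-node access is enough here
  resources:
    requests:
      storage: 500Mi     # illustrative size
```

With no `storageClassName` set, the cluster's default StorageClass (here, minikube's storage-provisioner addon) binds the claim.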
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.41s)

TestFunctional/parallel/SSHCmd (0.84s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)

TestFunctional/parallel/CpCmd (1.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh -n functional-034372 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 cp functional-034372:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1669940132/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh -n functional-034372 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1390047/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /etc/test/nested/copy/1390047/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1390047.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /etc/ssl/certs/1390047.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1390047.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /usr/share/ca-certificates/1390047.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13900472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /etc/ssl/certs/13900472.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13900472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /usr/share/ca-certificates/13900472.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.31s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-034372 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
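The Go template above ranges over the node's `metadata.labels` map and prints the keys; `text/template` visits map keys in sorted order. The same projection over a sample label map (the labels here are illustrative, not read from this run's node):

```python
# Emulate the go-template `{{range $k, $v := ...labels}}{{$k}} {{end}}`
# projection: emit the label keys, sorted, space-separated.
labels = {
    "kubernetes.io/os": "linux",
    "kubernetes.io/arch": "arm64",
    "minikube.k8s.io/name": "functional-034372",
}
keys = " ".join(sorted(labels))
assert keys.startswith("kubernetes.io/arch")
```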
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh "sudo systemctl is-active crio": exit status 1 (389.642342ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.86s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.86s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034372 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-034372
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-034372
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034372 image ls --format short --alsologtostderr:
I0717 22:55:19.928283 1431625 out.go:296] Setting OutFile to fd 1 ...
I0717 22:55:19.928534 1431625 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:19.928563 1431625 out.go:309] Setting ErrFile to fd 2...
I0717 22:55:19.928584 1431625 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:19.928877 1431625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
I0717 22:55:19.929537 1431625 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:19.929693 1431625 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:19.930232 1431625 cli_runner.go:164] Run: docker container inspect functional-034372 --format={{.State.Status}}
I0717 22:55:19.952147 1431625 ssh_runner.go:195] Run: systemctl --version
I0717 22:55:19.952203 1431625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034372
I0717 22:55:19.979851 1431625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34336 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/functional-034372/id_rsa Username:docker}
I0717 22:55:20.128025 1431625 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034372 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | 2002d33a54f72 | 192MB  |
| registry.k8s.io/kube-apiserver              | v1.27.3            | 39dfb036b0986 | 115MB  |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | b18bf71b941ba | 59.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | 24bc64e911039 | 181MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>             | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1                | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-proxy                  | v1.27.3            | fb73e92641fd5 | 66.5MB |
| registry.k8s.io/pause                       | latest             | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-034372  | ede3bef5e75e9 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-034372  | 17a3bfeb1e6e3 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.27.3            | bcb9e554eaab6 | 56.2MB |
| registry.k8s.io/pause                       | 3.9                | 829e9de338bd5 | 514kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine             | 66bf2c914bf4d | 41MB   |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | ab3683b584ae5 | 107MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | 97e04611ad434 | 51.4MB |
| gcr.io/google-containers/addon-resizer      | functional-034372  | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3                | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034372 image ls --format table --alsologtostderr:
I0717 22:55:24.293616 1431940 out.go:296] Setting OutFile to fd 1 ...
I0717 22:55:24.293861 1431940 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:24.293888 1431940 out.go:309] Setting ErrFile to fd 2...
I0717 22:55:24.293908 1431940 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:24.294195 1431940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
I0717 22:55:24.294808 1431940 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:24.295004 1431940 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:24.295489 1431940 cli_runner.go:164] Run: docker container inspect functional-034372 --format={{.State.Status}}
I0717 22:55:24.337790 1431940 ssh_runner.go:195] Run: systemctl --version
I0717 22:55:24.337848 1431940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034372
I0717 22:55:24.357188 1431940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34336 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/functional-034372/id_rsa Username:docker}
I0717 22:55:24.455552 1431940 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/07/17 22:55:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034372 image ls --format json --alsologtostderr:
[{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"107000000"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":[],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"59200000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"56200000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"66500000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-034372"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ede3bef5e75e9db97ab83efda790f4a6e9f7508675bd7ddb886238c989410656","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-034372"],"size":"1410000"},{"id":"17a3bfeb1e6e32872d22af5180debca2680550a56e46af15ded78d97d7c6c7e4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-034372"],"size":"30"},{"id":"2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"115000000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034372 image ls --format json --alsologtostderr:
I0717 22:55:24.008797 1431913 out.go:296] Setting OutFile to fd 1 ...
I0717 22:55:24.009089 1431913 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:24.009119 1431913 out.go:309] Setting ErrFile to fd 2...
I0717 22:55:24.009140 1431913 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:24.009465 1431913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
I0717 22:55:24.010232 1431913 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:24.010437 1431913 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:24.011012 1431913 cli_runner.go:164] Run: docker container inspect functional-034372 --format={{.State.Status}}
I0717 22:55:24.040827 1431913 ssh_runner.go:195] Run: systemctl --version
I0717 22:55:24.040913 1431913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034372
I0717 22:55:24.069468 1431913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34336 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/functional-034372/id_rsa Username:docker}
I0717 22:55:24.171008 1431913 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
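The JSON stdout above is an array of image records, each with an `id`, `repoDigests`, `repoTags`, and a `size` given as a string. A sketch of consuming that format, using a single entry trimmed from the output above as sample data:

```python
import json

# One entry trimmed from the `image ls --format json` stdout above.
sample = """[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
"repoDigests":[],
"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],
"size":"29000000"}]"""

images = json.loads(sample)
# Index image IDs by tag; one image may carry several tags.
by_tag = {tag: img["id"] for img in images for tag in img["repoTags"]}
assert "gcr.io/k8s-minikube/storage-provisioner:v5" in by_tag
```

Note `size` must be converted with `int(img["size"])` before any arithmetic, since the field is serialized as a string.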
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-034372 image ls --format yaml --alsologtostderr:
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "66500000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "107000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-034372
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 17a3bfeb1e6e32872d22af5180debca2680550a56e46af15ded78d97d7c6c7e4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-034372
size: "30"
- id: 2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "115000000"
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "56200000"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests: []
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "59200000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034372 image ls --format yaml --alsologtostderr:
I0717 22:55:20.251225 1431653 out.go:296] Setting OutFile to fd 1 ...
I0717 22:55:20.251449 1431653 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:20.251455 1431653 out.go:309] Setting ErrFile to fd 2...
I0717 22:55:20.251460 1431653 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:20.251721 1431653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
I0717 22:55:20.252412 1431653 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:20.252531 1431653 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:20.253010 1431653 cli_runner.go:164] Run: docker container inspect functional-034372 --format={{.State.Status}}
I0717 22:55:20.279039 1431653 ssh_runner.go:195] Run: systemctl --version
I0717 22:55:20.279093 1431653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034372
I0717 22:55:20.305404 1431653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34336 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/functional-034372/id_rsa Username:docker}
I0717 22:55:20.416110 1431653 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh pgrep buildkitd: exit status 1 (412.945602ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image build -t localhost/my-image:functional-034372 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 image build -t localhost/my-image:functional-034372 testdata/build --alsologtostderr: (2.795982132s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-034372 image build -t localhost/my-image:functional-034372 testdata/build --alsologtostderr:
I0717 22:55:20.965963 1431739 out.go:296] Setting OutFile to fd 1 ...
I0717 22:55:20.966886 1431739 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:20.966895 1431739 out.go:309] Setting ErrFile to fd 2...
I0717 22:55:20.966901 1431739 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:55:20.967187 1431739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
I0717 22:55:20.967901 1431739 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:20.968498 1431739 config.go:182] Loaded profile config "functional-034372": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 22:55:20.969024 1431739 cli_runner.go:164] Run: docker container inspect functional-034372 --format={{.State.Status}}
I0717 22:55:20.990910 1431739 ssh_runner.go:195] Run: systemctl --version
I0717 22:55:20.990962 1431739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-034372
I0717 22:55:21.024296 1431739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34336 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/functional-034372/id_rsa Username:docker}
I0717 22:55:21.131609 1431739 build_images.go:151] Building image from path: /tmp/build.3681588854.tar
I0717 22:55:21.131686 1431739 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 22:55:21.145067 1431739 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3681588854.tar
I0717 22:55:21.151095 1431739 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3681588854.tar: stat -c "%s %y" /var/lib/minikube/build/build.3681588854.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3681588854.tar': No such file or directory
I0717 22:55:21.151124 1431739 ssh_runner.go:362] scp /tmp/build.3681588854.tar --> /var/lib/minikube/build/build.3681588854.tar (3072 bytes)
I0717 22:55:21.197431 1431739 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3681588854
I0717 22:55:21.208830 1431739 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3681588854 -xf /var/lib/minikube/build/build.3681588854.tar
I0717 22:55:21.221497 1431739 docker.go:339] Building image: /var/lib/minikube/build/build.3681588854
I0717 22:55:21.221638 1431739 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-034372 /var/lib/minikube/build/build.3681588854
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.8s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ede3bef5e75e9db97ab83efda790f4a6e9f7508675bd7ddb886238c989410656 done
#8 naming to localhost/my-image:functional-034372 done
#8 DONE 0.1s
I0717 22:55:23.646782 1431739 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-034372 /var/lib/minikube/build/build.3681588854: (2.425099739s)
I0717 22:55:23.646850 1431739 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3681588854
I0717 22:55:23.658849 1431739 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3681588854.tar
I0717 22:55:23.674133 1431739 build_images.go:207] Built localhost/my-image:functional-034372 from /tmp/build.3681588854.tar
I0717 22:55:23.674159 1431739 build_images.go:123] succeeded building to: functional-034372
I0717 22:55:23.674170 1431739 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.47s)

TestFunctional/parallel/ImageCommands/Setup (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.922164759s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-034372
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

TestFunctional/parallel/DockerEnv/bash (1.8s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-034372 docker-env) && out/minikube-linux-arm64 status -p functional-034372"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-034372 docker-env) && out/minikube-linux-arm64 status -p functional-034372": (1.079577692s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-034372 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image load --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 image load --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr: (4.180199587s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-034372 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-034372 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-ll4rj" [2add535a-78da-4020-bc14-5fa28be50087] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-ll4rj" [2add535a-78da-4020-bc14-5fa28be50087] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.027067242s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image load --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr
E0717 22:54:31.590819 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 image load --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr: (2.620868798s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.45244157s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-034372
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image load --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 image load --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr: (3.196067006s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image save gcr.io/google-containers/addon-resizer:functional-034372 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image rm gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.350352509s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 service list -o json
functional_test.go:1493: Took "526.457256ms" to run "out/minikube-linux-arm64 -p functional-034372 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31431
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31431
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-034372
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 image save --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-034372 image save --daemon gcr.io/google-containers/addon-resizer:functional-034372 --alsologtostderr: (3.629942302s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-034372
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.68s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-034372 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-034372 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-034372 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-034372 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1427668: os: process already finished
helpers_test.go:502: unable to terminate pid 1427561: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-034372 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-034372 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b1648b43-e96d-42eb-a8db-a2210bf93551] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b1648b43-e96d-42eb-a8db-a2210bf93551] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.015026665s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.59s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-034372 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.138.164 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-034372 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "410.24241ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "85.56599ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "440.522388ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "95.571589ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/MountCmd/any-port (8.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdany-port1378504375/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689634504644134520" to /tmp/TestFunctionalparallelMountCmdany-port1378504375/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689634504644134520" to /tmp/TestFunctionalparallelMountCmdany-port1378504375/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689634504644134520" to /tmp/TestFunctionalparallelMountCmdany-port1378504375/001/test-1689634504644134520
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (565.520044ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 22:55 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 22:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 22:55 test-1689634504644134520
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh cat /mount-9p/test-1689634504644134520
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-034372 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [94ecb1b6-ba43-4f56-b18b-d5c340fc968a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [94ecb1b6-ba43-4f56-b18b-d5c340fc968a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [94ecb1b6-ba43-4f56-b18b-d5c340fc968a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01462651s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-034372 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdany-port1378504375/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.74s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdspecific-port1332490370/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (560.37553ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdspecific-port1332490370/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh "sudo umount -f /mount-9p": exit status 1 (450.522639ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-034372 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdspecific-port1332490370/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136582916/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136582916/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136582916/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T" /mount1: exit status 1 (791.331906ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-034372 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-034372 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136582916/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136582916/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-034372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136582916/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-034372
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-034372
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-034372
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (37.21s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-405790 --driver=docker  --container-runtime=docker
E0717 22:55:33.031533 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-405790 --driver=docker  --container-runtime=docker: (37.206911415s)
--- PASS: TestImageBuild/serial/Setup (37.21s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.99s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-405790
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-405790: (1.991134314s)
--- PASS: TestImageBuild/serial/NormalBuild (1.99s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-405790
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.96s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-405790
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-405790
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (103.84s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-539717 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0717 22:56:54.952655 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-539717 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m43.843127871s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (103.84s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.98s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons enable ingress --alsologtostderr -v=5: (10.976569235s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.98s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.72s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-539717 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.72s)

                                                
                                    
TestJSONOutput/start/Command (60.96s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-673203 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0717 22:59:29.109685 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.114943 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.125236 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.145472 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.185728 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.265997 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.426923 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:29.747462 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:30.388390 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:31.668790 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:34.229030 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:38.793600 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 22:59:39.350065 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 22:59:49.590352 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:00:10.070990 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-673203 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m0.956095358s)
--- PASS: TestJSONOutput/start/Command (60.96s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-673203 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-673203 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-673203 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-673203 --output=json --user=testUser: (5.873214033s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-102001 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-102001 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.995509ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"501b2801-04b7-4a44-b844-07fcbd8268b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102001] minikube v1.31.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26d0a516-2bf7-436b-996e-af9dda4b97b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"880de614-d898-4a49-8d0a-cef8c83050af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"522838ac-125c-4938-b7eb-272110a9eb13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig"}}
	{"specversion":"1.0","id":"19620a36-b7f2-4a6d-b2cd-562b81589753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube"}}
	{"specversion":"1.0","id":"7ee3bde7-a145-49bf-8e03-8c5e2afe792a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5ca619ab-eebc-4a76-ba7d-a0ce2de492ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"efa239e8-20ee-400d-84a7-2d1ba8a980ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-102001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-102001
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.66s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-373876 --network=
E0717 23:00:51.031251 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-373876 --network=: (31.456606019s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-373876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-373876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-373876: (2.175681414s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.66s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.37s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-663944 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-663944 --network=bridge: (32.249114271s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-663944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-663944
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-663944: (2.100107888s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.37s)

                                                
                                    
TestKicExistingNetwork (37.13s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-575328 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-575328 --network=existing-network: (34.941865446s)
helpers_test.go:175: Cleaning up "existing-network-575328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-575328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-575328: (2.033314898s)
--- PASS: TestKicExistingNetwork (37.13s)

                                                
                                    
TestKicCustomSubnet (37.88s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-228934 --subnet=192.168.60.0/24
E0717 23:02:12.953096 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-228934 --subnet=192.168.60.0/24: (35.547838689s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-228934 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-228934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-228934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-228934: (2.30864089s)
--- PASS: TestKicCustomSubnet (37.88s)

                                                
                                    
TestKicStaticIP (36.22s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-253837 --static-ip=192.168.200.200
E0717 23:03:10.424340 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:10.434475 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:10.445062 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:10.465273 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:10.505503 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:10.585754 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:10.746090 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:11.066592 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:11.706734 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:12.986964 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:15.547173 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-253837 --static-ip=192.168.200.200: (34.029274824s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-253837 ip
helpers_test.go:175: Cleaning up "static-ip-253837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-253837
E0717 23:03:20.668153 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-253837: (2.025367426s)
--- PASS: TestKicStaticIP (36.22s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (75.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-147351 --driver=docker  --container-runtime=docker
E0717 23:03:30.908356 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:03:51.389287 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-147351 --driver=docker  --container-runtime=docker: (34.121749379s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-150059 --driver=docker  --container-runtime=docker
E0717 23:04:11.108075 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:04:29.109692 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-150059 --driver=docker  --container-runtime=docker: (35.978513787s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-147351
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
E0717 23:04:32.350058 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-150059
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-150059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-150059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-150059: (2.198632953s)
helpers_test.go:175: Cleaning up "first-147351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-147351
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-147351: (2.126088875s)
--- PASS: TestMinikubeProfile (75.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-217480 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-217480 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.028361116s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.03s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-217480 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-219670 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-219670 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.930496509s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-219670 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.51s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-217480 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-217480 --alsologtostderr -v=5: (1.513162235s)
--- PASS: TestMountStart/serial/DeleteFirst (1.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-219670 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-219670
E0717 23:04:56.793646 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-219670: (1.243980439s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-219670
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-219670: (7.203609675s)
--- PASS: TestMountStart/serial/RestartStopped (8.20s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-219670 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-998957 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0717 23:05:54.270972 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-998957 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.323156529s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.89s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (36.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-998957 -- rollout status deployment/busybox: (2.833831462s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-bd7rd -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-xnz5w -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-bd7rd -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-xnz5w -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-bd7rd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-xnz5w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.65s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.2s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-bd7rd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-bd7rd -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-xnz5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-998957 -- exec busybox-67b7f59bb-xnz5w -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.20s)

                                                
                                    
TestMultiNode/serial/AddNode (21.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-998957 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-998957 -v 3 --alsologtostderr: (21.080265175s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.83s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp testdata/cp-test.txt multinode-998957:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1018762893/001/cp-test_multinode-998957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957:/home/docker/cp-test.txt multinode-998957-m02:/home/docker/cp-test_multinode-998957_multinode-998957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m02 "sudo cat /home/docker/cp-test_multinode-998957_multinode-998957-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957:/home/docker/cp-test.txt multinode-998957-m03:/home/docker/cp-test_multinode-998957_multinode-998957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m03 "sudo cat /home/docker/cp-test_multinode-998957_multinode-998957-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp testdata/cp-test.txt multinode-998957-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1018762893/001/cp-test_multinode-998957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957-m02:/home/docker/cp-test.txt multinode-998957:/home/docker/cp-test_multinode-998957-m02_multinode-998957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957 "sudo cat /home/docker/cp-test_multinode-998957-m02_multinode-998957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957-m02:/home/docker/cp-test.txt multinode-998957-m03:/home/docker/cp-test_multinode-998957-m02_multinode-998957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m03 "sudo cat /home/docker/cp-test_multinode-998957-m02_multinode-998957-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp testdata/cp-test.txt multinode-998957-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1018762893/001/cp-test_multinode-998957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957-m03:/home/docker/cp-test.txt multinode-998957:/home/docker/cp-test_multinode-998957-m03_multinode-998957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957 "sudo cat /home/docker/cp-test_multinode-998957-m03_multinode-998957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 cp multinode-998957-m03:/home/docker/cp-test.txt multinode-998957-m02:/home/docker/cp-test_multinode-998957-m03_multinode-998957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 ssh -n multinode-998957-m02 "sudo cat /home/docker/cp-test_multinode-998957-m03_multinode-998957-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.24s)

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-998957 node stop m03: (1.269341072s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-998957 status: exit status 7 (615.516312ms)

                                                
                                                
-- stdout --
	multinode-998957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-998957-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-998957-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr: exit status 7 (574.073794ms)

                                                
                                                
-- stdout --
	multinode-998957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-998957-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-998957-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 23:07:39.437868 1497320 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:07:39.438050 1497320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:07:39.438064 1497320 out.go:309] Setting ErrFile to fd 2...
	I0717 23:07:39.438069 1497320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:07:39.438438 1497320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	I0717 23:07:39.438638 1497320 out.go:303] Setting JSON to false
	I0717 23:07:39.438793 1497320 notify.go:220] Checking for updates...
	I0717 23:07:39.439729 1497320 mustload.go:65] Loading cluster: multinode-998957
	I0717 23:07:39.440545 1497320 config.go:182] Loaded profile config "multinode-998957": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 23:07:39.440578 1497320 status.go:255] checking status of multinode-998957 ...
	I0717 23:07:39.441743 1497320 cli_runner.go:164] Run: docker container inspect multinode-998957 --format={{.State.Status}}
	I0717 23:07:39.459584 1497320 status.go:330] multinode-998957 host status = "Running" (err=<nil>)
	I0717 23:07:39.459611 1497320 host.go:66] Checking if "multinode-998957" exists ...
	I0717 23:07:39.459958 1497320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-998957
	I0717 23:07:39.484961 1497320 host.go:66] Checking if "multinode-998957" exists ...
	I0717 23:07:39.485287 1497320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 23:07:39.485354 1497320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-998957
	I0717 23:07:39.521057 1497320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34406 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/multinode-998957/id_rsa Username:docker}
	I0717 23:07:39.615306 1497320 ssh_runner.go:195] Run: systemctl --version
	I0717 23:07:39.621315 1497320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:07:39.636598 1497320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:07:39.715248 1497320 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-17 23:07:39.703946494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:07:39.716044 1497320 kubeconfig.go:92] found "multinode-998957" server: "https://192.168.58.2:8443"
	I0717 23:07:39.716120 1497320 api_server.go:166] Checking apiserver status ...
	I0717 23:07:39.716172 1497320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:07:39.730239 1497320 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2154/cgroup
	I0717 23:07:39.741880 1497320 api_server.go:182] apiserver freezer: "8:freezer:/docker/57e1a3035b505703fc02bcf7d776b11fb74ce72aaf2be7577d7a667b6e850e6c/kubepods/burstable/podf4f43ac79110a99d27cd615ca62e1c66/b378a8eb7f2faa17f656c2071eaa6ed2b2a4ebdd4f84e874e4f2fa451b6dbec7"
	I0717 23:07:39.741959 1497320 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/57e1a3035b505703fc02bcf7d776b11fb74ce72aaf2be7577d7a667b6e850e6c/kubepods/burstable/podf4f43ac79110a99d27cd615ca62e1c66/b378a8eb7f2faa17f656c2071eaa6ed2b2a4ebdd4f84e874e4f2fa451b6dbec7/freezer.state
	I0717 23:07:39.753080 1497320 api_server.go:204] freezer state: "THAWED"
	I0717 23:07:39.753109 1497320 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 23:07:39.762175 1497320 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 23:07:39.762223 1497320 status.go:421] multinode-998957 apiserver status = Running (err=<nil>)
	I0717 23:07:39.762239 1497320 status.go:257] multinode-998957 status: &{Name:multinode-998957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 23:07:39.762260 1497320 status.go:255] checking status of multinode-998957-m02 ...
	I0717 23:07:39.762565 1497320 cli_runner.go:164] Run: docker container inspect multinode-998957-m02 --format={{.State.Status}}
	I0717 23:07:39.781365 1497320 status.go:330] multinode-998957-m02 host status = "Running" (err=<nil>)
	I0717 23:07:39.781390 1497320 host.go:66] Checking if "multinode-998957-m02" exists ...
	I0717 23:07:39.781697 1497320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-998957-m02
	I0717 23:07:39.802850 1497320 host.go:66] Checking if "multinode-998957-m02" exists ...
	I0717 23:07:39.803159 1497320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 23:07:39.803203 1497320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-998957-m02
	I0717 23:07:39.823591 1497320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34411 SSHKeyPath:/home/jenkins/minikube-integration/16899-1384661/.minikube/machines/multinode-998957-m02/id_rsa Username:docker}
	I0717 23:07:39.919138 1497320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:07:39.932759 1497320 status.go:257] multinode-998957-m02 status: &{Name:multinode-998957-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 23:07:39.932790 1497320 status.go:255] checking status of multinode-998957-m03 ...
	I0717 23:07:39.933132 1497320 cli_runner.go:164] Run: docker container inspect multinode-998957-m03 --format={{.State.Status}}
	I0717 23:07:39.952936 1497320 status.go:330] multinode-998957-m03 host status = "Stopped" (err=<nil>)
	I0717 23:07:39.952956 1497320 status.go:343] host is not running, skipping remaining checks
	I0717 23:07:39.952963 1497320 status.go:257] multinode-998957-m03 status: &{Name:multinode-998957-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (14.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-998957 node start m03 --alsologtostderr: (13.627943045s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.50s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (121.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-998957
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-998957
E0717 23:08:10.424339 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-998957: (22.742204197s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-998957 --wait=true -v=8 --alsologtostderr
E0717 23:08:38.111396 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:09:11.108335 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:09:29.109173 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-998957 --wait=true -v=8 --alsologtostderr: (1m38.955776547s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-998957
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.87s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-998957 node delete m03: (4.566149049s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.37s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-998957 stop: (21.692748447s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-998957 status: exit status 7 (96.120538ms)

                                                
                                                
-- stdout --
	multinode-998957
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-998957-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr: exit status 7 (91.533071ms)

                                                
                                                
-- stdout --
	multinode-998957
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-998957-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 23:10:23.536541 1513274 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:10:23.536738 1513274 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:10:23.536770 1513274 out.go:309] Setting ErrFile to fd 2...
	I0717 23:10:23.536792 1513274 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:10:23.537115 1513274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1384661/.minikube/bin
	I0717 23:10:23.537333 1513274 out.go:303] Setting JSON to false
	I0717 23:10:23.537456 1513274 mustload.go:65] Loading cluster: multinode-998957
	I0717 23:10:23.537507 1513274 notify.go:220] Checking for updates...
	I0717 23:10:23.537893 1513274 config.go:182] Loaded profile config "multinode-998957": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 23:10:23.537932 1513274 status.go:255] checking status of multinode-998957 ...
	I0717 23:10:23.538446 1513274 cli_runner.go:164] Run: docker container inspect multinode-998957 --format={{.State.Status}}
	I0717 23:10:23.558838 1513274 status.go:330] multinode-998957 host status = "Stopped" (err=<nil>)
	I0717 23:10:23.558859 1513274 status.go:343] host is not running, skipping remaining checks
	I0717 23:10:23.558865 1513274 status.go:257] multinode-998957 status: &{Name:multinode-998957 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 23:10:23.558909 1513274 status.go:255] checking status of multinode-998957-m02 ...
	I0717 23:10:23.559205 1513274 cli_runner.go:164] Run: docker container inspect multinode-998957-m02 --format={{.State.Status}}
	I0717 23:10:23.577598 1513274 status.go:330] multinode-998957-m02 host status = "Stopped" (err=<nil>)
	I0717 23:10:23.577619 1513274 status.go:343] host is not running, skipping remaining checks
	I0717 23:10:23.577626 1513274 status.go:257] multinode-998957-m02 status: &{Name:multinode-998957-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (86.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-998957 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0717 23:10:34.154296 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-998957 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.967840802s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-998957 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-998957
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-998957-m02 --driver=docker  --container-runtime=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-998957-m02 --driver=docker  --container-runtime=docker: exit status 14 (97.23405ms)

                                                
                                                
-- stdout --
	* [multinode-998957-m02] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-998957-m02' is duplicated with machine name 'multinode-998957-m02' in profile 'multinode-998957'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-998957-m03 --driver=docker  --container-runtime=docker
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-998957-m03 --driver=docker  --container-runtime=docker: (41.054763925s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-998957
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-998957: exit status 80 (353.506885ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-998957
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-998957-m03 already exists in multinode-998957-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-998957-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-998957-m03: (2.18748935s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.75s)

                                                
                                    
TestPreload (168.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-255877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0717 23:13:10.424359 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-255877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m17.781272397s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-255877 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-255877 image pull gcr.io/k8s-minikube/busybox: (1.461470805s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-255877
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-255877: (10.959141583s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-255877 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0717 23:14:11.108229 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:14:29.109591 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-255877 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m15.958773271s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-255877 image list
helpers_test.go:175: Cleaning up "test-preload-255877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-255877
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-255877: (2.320967456s)
--- PASS: TestPreload (168.71s)

                                                
                                    
TestScheduledStopUnix (109.49s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-606165 --memory=2048 --driver=docker  --container-runtime=docker
E0717 23:15:52.153861 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-606165 --memory=2048 --driver=docker  --container-runtime=docker: (36.010861939s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606165 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-606165 -n scheduled-stop-606165
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606165 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606165 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-606165 -n scheduled-stop-606165
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-606165
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606165 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-606165
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-606165: exit status 7 (74.433332ms)

                                                
                                                
-- stdout --
	scheduled-stop-606165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-606165 -n scheduled-stop-606165
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-606165 -n scheduled-stop-606165: exit status 7 (74.566296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-606165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-606165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-606165: (1.696050535s)
--- PASS: TestScheduledStopUnix (109.49s)

                                                
                                    
TestSkaffold (112.61s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2849599530 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-166225 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-166225 --memory=2600 --driver=docker  --container-runtime=docker: (32.861910385s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2849599530 run --minikube-profile skaffold-166225 --kube-context skaffold-166225 --status-check=true --port-forward=false --interactive=false
E0717 23:18:10.424263 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2849599530 run --minikube-profile skaffold-166225 --kube-context skaffold-166225 --status-check=true --port-forward=false --interactive=false: (1m4.760286548s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6d96596875-l8s4h" [dee9f2c2-8074-48a0-986f-ea2e34beb0d4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.031376966s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6dbfc5bd46-vvpxr" [c8bec0f2-356f-41f2-9b03-f53f0c27b844] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011321947s
helpers_test.go:175: Cleaning up "skaffold-166225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-166225
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-166225: (2.897527119s)
--- PASS: TestSkaffold (112.61s)

                                                
                                    
TestInsufficientStorage (11.4s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-353712 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0717 23:19:11.108387 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-353712 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.006582436s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"976d02c9-13b0-4863-911c-997ec3636e23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-353712] minikube v1.31.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0e45bd9-9cea-4da5-9590-15075776c00a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"eb2f8ef8-8881-4cd9-b54d-eba767543386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8616398d-17dd-4530-9a65-569cbae77deb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig"}}
	{"specversion":"1.0","id":"a3d73731-1e8a-4a03-8d2f-12ff6f4e4f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube"}}
	{"specversion":"1.0","id":"d634dff6-9b3e-461e-b33f-073bd92bbad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4f089b1b-850e-46c7-a9f3-92949e545292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"31072421-019f-46b1-a3d1-9d84669e39cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6547e27a-ff66-4492-a232-d2d8181ea920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"105fa5f8-ec37-4762-959f-09c8a318f0be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"46261775-db12-463e-a8fa-85493007a54e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2c7b22c4-bd06-442c-be69-513f490419f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-353712 in cluster insufficient-storage-353712","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8e7d8ae-d7ed-4095-8b47-830b1432845d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"328847d7-da42-4c4d-a68d-356bea2bb617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ddd833f-b0a2-4832-bf7d-7e6f12f175c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-353712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-353712 --output=json --layout=cluster: exit status 7 (320.612199ms)
-- stdout --
	{"Name":"insufficient-storage-353712","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-353712","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0717 23:19:18.237943 1550552 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-353712" does not appear in /home/jenkins/minikube-integration/16899-1384661/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-353712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-353712 --output=json --layout=cluster: exit status 7 (319.390395ms)
-- stdout --
	{"Name":"insufficient-storage-353712","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-353712","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0717 23:19:18.557960 1550605 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-353712" does not appear in /home/jenkins/minikube-integration/16899-1384661/kubeconfig
	E0717 23:19:18.570627 1550605 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/insufficient-storage-353712/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-353712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-353712
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-353712: (1.751189856s)
--- PASS: TestInsufficientStorage (11.40s)
TestRunningBinaryUpgrade (124.44s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.2140521761.exe start -p running-upgrade-319816 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0717 23:23:55.967555 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:55.973372 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:55.983602 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:56.004928 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:56.045676 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:56.128950 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:56.289336 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:56.609705 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:57.250442 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:23:58.530633 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:24:01.091675 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:24:06.212122 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:24:11.108355 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.2140521761.exe start -p running-upgrade-319816 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m5.12168703s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-319816 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-319816 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.71754277s)
helpers_test.go:175: Cleaning up "running-upgrade-319816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-319816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-319816: (2.345361678s)
--- PASS: TestRunningBinaryUpgrade (124.44s)
TestKubernetesUpgrade (151.22s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m14.380307891s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-915922
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-915922: (11.489653361s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-915922 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-915922 status --format={{.Host}}: exit status 7 (108.130262ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.924101369s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-915922 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (124.233775ms)
-- stdout --
	* [kubernetes-upgrade-915922] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-915922
	    minikube start -p kubernetes-upgrade-915922 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9159222 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-915922 --kubernetes-version=v1.27.3
	    
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-915922 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.996721563s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-915922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-915922
E0717 23:23:10.424301 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-915922: (3.044320718s)
--- PASS: TestKubernetesUpgrade (151.22s)
TestMissingContainerUpgrade (196.91s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.1461249458.exe start -p missing-upgrade-387152 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.1461249458.exe start -p missing-upgrade-387152 --memory=2200 --driver=docker  --container-runtime=docker: (1m57.914128386s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-387152
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-387152: (1.306061342s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-387152
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-387152 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-387152 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m13.98381314s)
helpers_test.go:175: Cleaning up "missing-upgrade-387152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-387152
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-387152: (2.660948058s)
--- PASS: TestMissingContainerUpgrade (196.91s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-107220 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-107220 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (95.94386ms)
-- stdout --
	* [NoKubernetes-107220] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1384661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1384661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
TestNoKubernetes/serial/StartWithK8s (47.03s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-107220 --driver=docker  --container-runtime=docker
E0717 23:19:29.110467 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:19:33.472207 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-107220 --driver=docker  --container-runtime=docker: (46.60253354s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-107220 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.03s)
TestNoKubernetes/serial/StartWithStopK8s (8.18s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-107220 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-107220 --no-kubernetes --driver=docker  --container-runtime=docker: (5.828778626s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-107220 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-107220 status -o json: exit status 2 (379.900949ms)
-- stdout --
	{"Name":"NoKubernetes-107220","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-107220
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-107220: (1.975731357s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.18s)
TestNoKubernetes/serial/Start (10.33s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-107220 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-107220 --no-kubernetes --driver=docker  --container-runtime=docker: (10.325542388s)
--- PASS: TestNoKubernetes/serial/Start (10.33s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.5s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-107220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-107220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (501.861069ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.50s)
TestNoKubernetes/serial/ProfileList (1.65s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.65s)
TestNoKubernetes/serial/Stop (1.37s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-107220
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-107220: (1.374187874s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)
TestNoKubernetes/serial/StartNoArgs (8.44s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-107220 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-107220 --driver=docker  --container-runtime=docker: (8.439676386s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.44s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-107220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-107220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (384.860964ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)
TestStoppedBinaryUpgrade/Setup (1.18s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)
TestStoppedBinaryUpgrade/Upgrade (108.45s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.1150204880.exe start -p stopped-upgrade-710710 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.1150204880.exe start -p stopped-upgrade-710710 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m1.615564825s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.1150204880.exe -p stopped-upgrade-710710 stop
E0717 23:24:16.453108 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.1150204880.exe -p stopped-upgrade-710710 stop: (2.583484838s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-710710 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0717 23:24:29.109735 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:24:36.933788 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-710710 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.247192224s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (108.45s)
TestStoppedBinaryUpgrade/MinikubeLogs (2.41s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-710710
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-710710: (2.414395197s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.41s)
TestPause/serial/Start (77.11s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-179432 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0717 23:25:17.894625 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-179432 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m17.111836858s)
--- PASS: TestPause/serial/Start (77.11s)
TestPause/serial/SecondStartNoReconfiguration (38.97s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-179432 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0717 23:26:39.815296 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-179432 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.917393978s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.97s)
TestPause/serial/Pause (0.98s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-179432 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)
TestPause/serial/VerifyStatus (0.43s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-179432 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-179432 --output=json --layout=cluster: exit status 2 (428.726521ms)
-- stdout --
	{"Name":"pause-179432","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-179432","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
TestPause/serial/Unpause (0.87s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-179432 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)
TestPause/serial/PauseAgain (1.1s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-179432 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-179432 --alsologtostderr -v=5: (1.101728279s)
--- PASS: TestPause/serial/PauseAgain (1.10s)
TestPause/serial/DeletePaused (2.45s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-179432 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-179432 --alsologtostderr -v=5: (2.447028465s)
--- PASS: TestPause/serial/DeletePaused (2.45s)
TestPause/serial/VerifyDeletedResources (0.47s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-179432
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-179432: exit status 1 (22.067955ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-179432: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)
TestStartStop/group/old-k8s-version/serial/FirstStart (136.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-270503 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0717 23:29:11.108673 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:29:23.655643 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:29:29.108994 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-270503 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m16.701574762s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.70s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-270503 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10a3b169-78b6-4858-9da3-b66a76d43b76] Pending
helpers_test.go:344: "busybox" [10a3b169-78b6-4858-9da3-b66a76d43b76] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10a3b169-78b6-4858-9da3-b66a76d43b76] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.038958s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-270503 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-270503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-270503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.356015218s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-270503 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.74s)

TestStartStop/group/no-preload/serial/FirstStart (76s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-810055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-810055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (1m16.002812555s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.00s)

TestStartStop/group/old-k8s-version/serial/Stop (11.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-270503 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-270503 --alsologtostderr -v=3: (11.776216449s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.78s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-270503 -n old-k8s-version-270503
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-270503 -n old-k8s-version-270503: exit status 7 (110.942903ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-270503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (448.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-270503 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0717 23:32:32.154098 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-270503 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m28.267777179s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-270503 -n old-k8s-version-270503
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (448.74s)

TestStartStop/group/no-preload/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-810055 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6ae4b77e-7e94-4049-8a1f-4937c5ffa63b] Pending
helpers_test.go:344: "busybox" [6ae4b77e-7e94-4049-8a1f-4937c5ffa63b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6ae4b77e-7e94-4049-8a1f-4937c5ffa63b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.025860844s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-810055 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.52s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-810055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-810055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107427889s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-810055 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (11.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-810055 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-810055 --alsologtostderr -v=3: (11.095087136s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-810055 -n no-preload-810055
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-810055 -n no-preload-810055: exit status 7 (99.209753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-810055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (342s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-810055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
E0717 23:33:10.423906 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:33:55.966993 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:34:11.108246 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:34:29.109624 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:36:13.472756 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:38:10.424256 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-810055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (5m41.509281271s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-810055 -n no-preload-810055
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (342.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6lrlx" [efec1df5-1418-4f0e-9388-99502425eae3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6lrlx" [efec1df5-1418-4f0e-9388-99502425eae3] Running
E0717 23:38:55.966956 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.03047503s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6lrlx" [efec1df5-1418-4f0e-9388-99502425eae3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007338088s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-810055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-810055 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/Pause (3.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-810055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-810055 -n no-preload-810055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-810055 -n no-preload-810055: exit status 2 (368.32791ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-810055 -n no-preload-810055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-810055 -n no-preload-810055: exit status 2 (360.538548ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-810055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-810055 -n no-preload-810055
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-810055 -n no-preload-810055
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2z82x" [6915f128-2509-44d5-b1d9-cd525e5da774] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025164166s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/FirstStart (75.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-587590 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
E0717 23:39:11.108193 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-587590 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (1m15.069312181s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.07s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2z82x" [6915f128-2509-44d5-b1d9-cd525e5da774] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.078234263s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-270503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-270503 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/old-k8s-version/serial/Pause (3.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-270503 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-270503 -n old-k8s-version-270503
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-270503 -n old-k8s-version-270503: exit status 2 (418.456528ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-270503 -n old-k8s-version-270503
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-270503 -n old-k8s-version-270503: exit status 2 (387.920409ms)
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-270503 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-270503 -n old-k8s-version-270503
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-270503 -n old-k8s-version-270503
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.85s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-319529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
E0717 23:39:29.109580 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:40:19.016062 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-319529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (1m12.771150818s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.77s)

TestStartStop/group/embed-certs/serial/DeployApp (9.81s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-587590 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9739a59f-f841-4d4d-93d6-e9110f81086a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9739a59f-f841-4d4d-93d6-e9110f81086a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.054967293s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-587590 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.81s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-587590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-587590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.741070998s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-587590 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.97s)

TestStartStop/group/embed-certs/serial/Stop (11.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-587590 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-587590 --alsologtostderr -v=3: (11.035274252s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-319529 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9aff5cfc-a2e7-496f-9c89-32fa8dd6b2dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9aff5cfc-a2e7-496f-9c89-32fa8dd6b2dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.025673525s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-319529 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-587590 -n embed-certs-587590
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-587590 -n embed-certs-587590: exit status 7 (75.322752ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-587590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (351.7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-587590 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-587590 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (5m51.097160423s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-587590 -n embed-certs-587590
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (351.70s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-319529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-319529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.594776888s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-319529 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.70s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-319529 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-319529 --alsologtostderr -v=3: (10.961935286s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529: exit status 7 (74.358556ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-319529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-319529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
E0717 23:41:16.840971 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:16.846228 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:16.856478 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:16.876708 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:16.917135 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:16.998158 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:17.158464 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:17.478961 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:18.120136 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:19.401243 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:21.962020 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:27.082719 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:37.323334 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:41:57.803540 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:42:38.763791 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:42:43.495657 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:43.501002 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:43.511295 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:43.531624 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:43.571915 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:43.652182 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:43.812574 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:44.134202 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:44.775090 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:46.056207 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:48.616462 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:42:53.737608 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:43:03.977762 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:43:10.424111 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:43:24.457972 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:43:54.155845 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:43:55.967447 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:44:00.684335 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:44:05.418468 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:44:11.108315 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:44:29.109672 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:45:27.339603 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:46:16.840688 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-319529 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (5m53.015482073s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.60s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9rb6j" [4208fff7-284d-416b-bc44-c1734afb5bfb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 23:46:44.524991 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9rb6j" [4208fff7-284d-416b-bc44-c1734afb5bfb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.024209628s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9rb6j" [4208fff7-284d-416b-bc44-c1734afb5bfb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009333738s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-587590 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-s9vww" [c84bf41b-37ee-48ad-af35-6a73e8b1408e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-s9vww" [c84bf41b-37ee-48ad-af35-6a73e8b1408e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.023938466s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-587590 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.52s)

TestStartStop/group/embed-certs/serial/Pause (4.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-587590 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-587590 --alsologtostderr -v=1: (1.133181643s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-587590 -n embed-certs-587590
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-587590 -n embed-certs-587590: exit status 2 (464.532725ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-587590 -n embed-certs-587590
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-587590 -n embed-certs-587590: exit status 2 (455.862639ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-587590 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-587590 --alsologtostderr -v=1: (1.021599384s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-587590 -n embed-certs-587590
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-587590 -n embed-certs-587590
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.63s)

TestStartStop/group/newest-cni/serial/FirstStart (54.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-131259 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-131259 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (54.168640804s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.17s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-s9vww" [c84bf41b-37ee-48ad-af35-6a73e8b1408e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009287683s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-319529 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-319529 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-319529 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-319529 --alsologtostderr -v=1: (1.018768848s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529: exit status 2 (519.708392ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529: exit status 2 (449.28999ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-319529 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-319529 -n default-k8s-diff-port-319529
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.48s)

TestNetworkPlugins/group/auto/Start (69.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0717 23:47:43.496525 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m9.452968286s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.45s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.74s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-131259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-131259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.744039861s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.74s)

TestStartStop/group/newest-cni/serial/Stop (11.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-131259 --alsologtostderr -v=3
E0717 23:48:10.424175 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
E0717 23:48:11.179952 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-131259 --alsologtostderr -v=3: (11.411862774s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-131259 -n newest-cni-131259
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-131259 -n newest-cni-131259: exit status 7 (79.552617ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-131259 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (36.89s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-131259 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-131259 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.3: (36.454827055s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-131259 -n newest-cni-131259
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-t24jh" [44779910-65cb-42a3-910b-5ef3c1b51247] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-t24jh" [44779910-65cb-42a3-910b-5ef3c1b51247] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.007933564s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.42s)

TestNetworkPlugins/group/auto/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.35s)

TestNetworkPlugins/group/auto/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-131259 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-131259 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-131259 --alsologtostderr -v=1: (1.093747312s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-131259 -n newest-cni-131259
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-131259 -n newest-cni-131259: exit status 2 (494.651232ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-131259 -n newest-cni-131259
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-131259 -n newest-cni-131259: exit status 2 (467.227046ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-131259 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-131259 -n newest-cni-131259
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-131259 -n newest-cni-131259
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.71s)
E0717 23:57:00.569248 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (69.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m9.66234854s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.66s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (86.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0717 23:49:12.154306 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
E0717 23:49:29.109680 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/functional-034372/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m26.935641165s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w2b7f" [9b156496-ccd1-4425-b08c-cae4c14d1aa3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.047873516s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.62s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hzszb" [1b47ea4e-1941-431b-ae73-46fc1ba969bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hzszb" [1b47ea4e-1941-431b-ae73-46fc1ba969bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.021909182s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.62s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ctggb" [85fedf00-08aa-4e9d-9613-e58539799436] Running
E0717 23:50:39.664974 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:39.670228 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:39.681297 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:39.702604 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:39.743959 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:39.825008 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:39.985115 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:40.306073 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:40.946556 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:50:42.227097 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.038134028s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.65s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-441967 replace --force -f testdata/netcat-deployment.yaml
E0717 23:50:44.787234 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fdb7c" [ce1cf067-eeb3-47f6-80d2-a787dff1de8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 23:50:49.907938 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-fdb7c" [ce1cf067-eeb3-47f6-80d2-a787dff1de8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.009689113s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.65s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m13.424076321s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.42s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.47s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.47s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.46s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0717 23:51:00.149103 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.46s)

                                                
                                    
TestNetworkPlugins/group/false/Start (93.98s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0717 23:52:01.590992 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m33.980633872s)
--- PASS: TestNetworkPlugins/group/false/Start (93.98s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-dgf66" [3dfbbf68-772d-48b8-887a-800df5b3a873] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-dgf66" [3dfbbf68-772d-48b8-887a-800df5b3a873] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.022053066s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.49s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0717 23:52:43.495768 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/no-preload-810055/client.crt: no such file or directory
E0717 23:52:53.473438 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m30.780629433s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.78s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.65s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-8zbvt" [341e2b73-0841-4fa3-8272-6e2784dbc2f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-8zbvt" [341e2b73-0841-4fa3-8272-6e2784dbc2f6] Running
E0717 23:53:10.424645 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/ingress-addon-legacy-539717/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.03249704s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.65s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.33s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (65.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0717 23:53:42.028324 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/auto-441967/client.crt: no such file or directory
E0717 23:53:52.268693 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/auto-441967/client.crt: no such file or directory
E0717 23:53:55.967743 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/skaffold-166225/client.crt: no such file or directory
E0717 23:54:11.108338 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/addons-534909/client.crt: no such file or directory
E0717 23:54:12.749078 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/auto-441967/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m5.792228691s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.79s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rwdq5" [c7589665-016c-4837-8e7e-7dfd970d6b16] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rwdq5" [c7589665-016c-4837-8e7e-7dfd970d6b16] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.013588339s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.49s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lqdnw" [7b6d9c6b-ce49-4257-bd2d-851a9f496a1b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.039866121s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (15.48s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fq9jh" [89937b1e-262b-4062-a920-910d2ca5dc96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fq9jh" [89937b1e-262b-4062-a920-910d2ca5dc96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.008790793s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.48s)

TestNetworkPlugins/group/bridge/Start (89.57s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0717 23:54:53.710139 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/auto-441967/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m29.570344759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.57s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0717 23:55:06.933000 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
E0717 23:55:06.940959 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
E0717 23:55:06.951174 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
E0717 23:55:06.971384 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
E0717 23:55:07.011660 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0717 23:55:07.092974 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
E0717 23:55:07.254042 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

TestNetworkPlugins/group/kubenet/Start (57.56s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0717 23:55:38.643476 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:38.648731 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:38.658967 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:38.679174 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:38.719421 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:38.799696 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:38.960040 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:39.280821 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:39.666032 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:55:39.921873 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:41.203066 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:43.766655 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:47.902089 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
E0717 23:55:48.887589 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:55:59.128106 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
E0717 23:56:07.352195 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/default-k8s-diff-port-319529/client.crt: no such file or directory
E0717 23:56:15.630378 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/auto-441967/client.crt: no such file or directory
E0717 23:56:16.840566 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/old-k8s-version-270503/client.crt: no such file or directory
E0717 23:56:19.608693 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/calico-441967/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-441967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (57.563491202s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (57.56s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (11.52s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qqj82" [1f7221f8-f0be-459e-b704-20b33d92190a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qqj82" [1f7221f8-f0be-459e-b704-20b33d92190a] Running
E0717 23:56:28.862568 1390047 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/kindnet-441967/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.013776852s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.52s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-441967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-441967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jbxh4" [fd0f9300-f280-47ba-a1fd-5718f8664549] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jbxh4" [fd0f9300-f280-47ba-a1fd-5718f8664549] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.014215065s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.40s)

TestNetworkPlugins/group/bridge/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.31s)

TestNetworkPlugins/group/bridge/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.26s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

TestNetworkPlugins/group/kubenet/DNS (0.43s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-441967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.43s)

TestNetworkPlugins/group/kubenet/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.28s)

TestNetworkPlugins/group/kubenet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-441967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.25s)

Test skip (24/319)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-120890 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-120890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-120890
--- SKIP: TestDownloadOnlyKic (0.56s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-507014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-507014
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/cilium (5.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-441967 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-441967

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-441967

>>> host: /etc/nsswitch.conf:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/hosts:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/resolv.conf:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-441967

>>> host: crictl pods:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: crictl containers:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> k8s: describe netcat deployment:
error: context "cilium-441967" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-441967" does not exist

>>> k8s: netcat logs:
error: context "cilium-441967" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-441967" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-441967" does not exist

>>> k8s: coredns logs:
error: context "cilium-441967" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-441967" does not exist

>>> k8s: api server logs:
error: context "cilium-441967" does not exist

>>> host: /etc/cni:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: ip a s:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: ip r s:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: iptables-save:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: iptables table nat:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-441967

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-441967

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-441967" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-441967" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-441967

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-441967

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-441967" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-441967" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-441967" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-441967" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-441967" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: kubelet daemon config:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> k8s: kubelet logs:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-1384661/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 23:26:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-179432
contexts:
- context:
    cluster: pause-179432
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 23:26:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: context_info
    namespace: default
    user: pause-179432
  name: pause-179432
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-179432
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/pause-179432/client.crt
    client-key: /home/jenkins/minikube-integration/16899-1384661/.minikube/profiles/pause-179432/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-441967

>>> host: docker daemon status:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: docker daemon config:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: docker system info:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: cri-docker daemon status:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: cri-docker daemon config:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: cri-dockerd version:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: containerd daemon status:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: containerd daemon config:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: containerd config dump:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: crio daemon status:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: crio daemon config:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: /etc/crio:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

>>> host: crio config:
* Profile "cilium-441967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441967"

----------------------- debugLogs end: cilium-441967 [took: 5.771750969s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-441967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-441967
--- SKIP: TestNetworkPlugins/group/cilium (5.96s)